Abstract: We demonstrate a 40 Gb/s self-synchronizing, all-optical packet
clock recovery circuit designed for efficient packet-mode traffic. The circuit
locks instantaneously and enables sub-nanosecond packet spacing due to the
low clock persistence time. A low-Q Fabry-Perot filter, tuned to the line rate, is
used as a passive resonator that generates a retimed, clock-resembling
signal. As a reshaping element, an optical power-limiting gate is
incorporated to perform bitwise pulse equalization. Using two preamble
bits, the clock is captured instantly and persists for the duration of the data
packet plus 16 bits. The performance of the circuit suggests its
suitability for future all-optical packet-switched networks with reduced
transmission overhead and fine network granularity.
Abstract: In this paper, we survey the current state-of-the-art in middleware and systems for Wireless Sensor Networks (WSN). We provide a discussion on the definition of WSN middleware, design issues associated with it, and the taxonomies commonly used to categorize it. We also present a categorization of a number of such middleware platforms, using middleware functionalities and challenges which we think will play a crucial role in developing software for WSN in the near future. Finally, we provide a short discussion on WSN middleware trends.
Abstract: The efficient use of resources and the lossless transfer of data bursts in future optical
networks requires the accurate knowledge of the available bandwidth for each network
link. Such information is important in monitoring congestion and can be used by
appropriate load balancing and congestion avoidance mechanisms. In this paper we
propose a mechanism for monitoring and subsequently managing bandwidth resources,
using the Simple Network Management Protocol (SNMP). In the proposed mechanism,
link bandwidth availability is not a scalar parameter, but a function of time that records
the future utilization of the link. For every output port, each agent-node maintains a
simple data structure in the form of a table that records the utilization profile of that
outgoing link. With the addition of new objects in the Management Information Base
(MIB) of each agent-node and proper synchronization, SNMP can be used to update
and retrieve the reservations made on the links in order to obtain an instant picture of
the network traffic situation.
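To make the idea of a time-dependent availability profile concrete, the following minimal Python sketch (not taken from the paper; the slot granularity, capacity units and method names are illustrative assumptions) shows the kind of per-port table an agent-node could maintain and expose through additional MIB objects:

class UtilizationProfile:
    """Future utilization of one outgoing link, kept in fixed-size time slots."""

    def __init__(self, capacity_gbps: float, horizon_slots: int):
        self.capacity = capacity_gbps
        # reserved[t] = bandwidth already committed for future slot t
        self.reserved = [0.0] * horizon_slots

    def available(self, start: int, end: int) -> float:
        """Smallest spare capacity over the requested future interval."""
        return min(self.capacity - r for r in self.reserved[start:end])

    def reserve(self, start: int, end: int, bw: float) -> bool:
        """Record a reservation if every slot in [start, end) can accommodate it."""
        if self.available(start, end) < bw:
            return False
        for t in range(start, end):
            self.reserved[t] += bw
        return True

# An agent-node would keep one such table per output port as extra MIB objects;
# a manager reads or updates it with ordinary SNMP GET/SET operations.
port_profile = UtilizationProfile(capacity_gbps=10.0, horizon_slots=1000)
port_profile.reserve(start=5, end=25, bw=2.5)
print(port_profile.available(0, 50))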
Abstract: Smart Dust is comprised of a vast number of ultra-small fully autonomous computing and communication devices, with very restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Smart Dust can be very useful in practice i.e. in the local detection of a remote crucial event and the propagation of data reporting its realization to a control center.
In this work, we have implemented and experimentally evaluated four protocols (PFR, LTP and two variations of LTP which we here introduce) for local detection and propagation in smart dust networks, under new, more general and realistic modelling assumptions. We comparatively study, by using extensive experiments, their behavior highlighting their relative advantages and disadvantages. All protocols are very successful. In the setting we considered here, PFR seems to be faster while the LTP based protocols are more energy efficient.
Abstract: In this work we propose a new unified PON RAN architecture for LTE mobile backhaul networks, employing ring-based WDM PONs. The proposed architecture supports dynamic setup of virtual circuits for inter-base station communication, over a dedicated λ LAN channel. The reservation mechanism is arbitrated by the OLT, which also monitors the traffic imbalances of downstream channels. The proposed architecture also supports load balancing, by dynamically reallocating and sharing the capacity of the downstream wavelengths.
Abstract: Large-scale sensor networks, monitoring an environment at close range with high spatial and temporal resolutions are expected to play an important role in various applications, e.g., assessing the ``health'' of machines; environmental, medical, food-safety, and habitat monitoring; inventory control, building automation, etc. Ensuring the security of these complex and yet resource-constrained systems has emerged as one of the most pressing challenges for researchers. In this paper (i) we present the major threats and some characteristic countermeasures, (ii) we propose a way to classify existing systems for intrusion detection in wireless sensor networks and (iii) we present a new approach for decentralized energy efficient intrusion detection that can be used to improve security from both external and internal adversaries.
Abstract: Core networks of the future will have a translucent and eventually transparent optical structure. Ultra-high-speed end-to-end connectivity with high quality of service and high reliability will be realized through the exploitation of optimized protocols and lightpath routing algorithms. These algorithms will complement a flexible control and management plane integrated in the proposed solution. Physical layer impairments and optical performance are monitored and incorporated in impairment-aware lightpath routing algorithms. These algorithms will be integrated into a novel dynamic network planning tool that will consider dynamic traffic characteristics, a reconfigurable optical layer, and varying physical impairment and component characteristics. The network planning tool along with extended control planes will make it possible to realize the vision of optical transparency. This article presents a novel framework that addresses dynamic cross-layer network planning and optimization while considering the development of a future transport network infrastructure.
Abstract: We here present the Forward Planning Situated Protocol (FPSP), for scalable, energy efficient and fault tolerant data propagation in situated wireless sensor networks. To deal with the increased complexity of such deeply networked sensor systems, instead of emphasizing a particular aspect of the services provided, i.e. either low-energy periodic, or low-latency event-driven, or high-success query-based sensing, FPSP uses two novel mechanisms that allow the network operator to adjust the performance of the protocol in terms of energy, latency and success rate on a per-task basis. We emphasize distributedness, direct or indirect interactions among relatively simple agents, flexibility and robustness.
The protocol operates by employing a series of plan & forward phases through which devices self-organize into forwarding groups that propagate data over discovered paths. FPSP performs a limited number of long range, high power data transmissions to collect information regarding the neighboring devices. The acquired information allows planning a sequence (whose length is parameterized by λ) of short range, low power transmissions between nearby particles, based on certain optimization criteria. All particles that decide to respond (based on local criteria) to these long range transmissions enter the forwarding phase during which information is propagated via the acquired plan. Clearly, the duration of the forwarding phases is characterized by the parameter λ, the transmission medium and the processing speed of the devices. In fact the parameter λ provides a mechanism to adjust the protocol performance in terms of the latency--energy trade-off. By reducing λ the latency is reduced at the cost of spending extra energy, while by increasing λ, the energy dissipation is reduced but the latency is increased.
To control the success rate--energy trade-off, particles react locally to environment and context changes by using a set of rules that are based on response thresholds that relate individual-level plasticity with network-level resiliency, motivated by the nature-inspired method for dividing labor, a metaphor of social insect behavior for solving problems [1]. Each particle has an individual response threshold θ that is related to the "local" density (as observed by the particle, [2]); particles engage in propagation of events when the level of the task-associated stimuli exceeds their thresholds. Let s be the intensity of a stimulus associated with a particular sensing task, set by the human authorities. We adopt the response function T_θ(s) = s^n / (s^n + θ^n), the probability of performing the task as a function of s, where n > 1 determines the steepness of the threshold. Thus, when θ is small (i.e. the network is sparse) the response probability increases; when s increases (i.e. for critical sensing tasks) the response probability increases as well.
This role-based approach, where a select number of devices do the high cost planning and the rest of the network operates in a low cost state, leads to systems with increased energy efficiency and high fault-tolerance, since these long range planning phases make it possible to bypass obstacles (where no sensors are available) or faulty sensors (that have been disabled due to power failure or other natural events).
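For illustration, a small Python sketch of the response-threshold rule quoted above; the functional form is the standard one from the division-of-labor literature, while the concrete values of s, theta and n below are made up for the example.

def response_probability(s: float, theta: float, n: float = 2.0) -> float:
    """Probability that a particle engages in propagation for stimulus intensity s."""
    return s**n / (s**n + theta**n)

# A sparse neighbourhood (small theta) or a critical task (large s) both raise
# the probability that the particle responds and joins the propagation.
print(response_probability(s=0.8, theta=0.2))  # sparse network, strong stimulus
print(response_probability(s=0.2, theta=0.8))  # dense network, mild stimulus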
Abstract: Future Grid Networks should be able to provide Quality
of Service (QoS) guarantees to their users. In this work
we propose a framework for Grid Networks that provides
deterministic delay guarantees to its Guaranteed Service
(GS) users and best effort service to its Best Effort (BE)
users. The proposed framework is theoretically and experimentally
analyzed. We also define four types of computational
resources based on the type of users (GS, BE) these
resources serve and the priority they give them. We implement
the proposed QoS framework for Grids and verify that
it not only satisfies the delay guarantees given to GS users,
but also improves performance in terms of deadlines missed
and resource use. In our simulations, data from a real Grid
Network are used, validating in this way the appropriateness
and usefulness of the proposed framework.
Abstract: Motivated by the problem of supporting energy-efficient broadcasting in ad hoc wireless networks, we study the Minimum
Energy Consumption Broadcast Subgraph (MECBS) problem. We present the first logarithmic approximation algorithm for the
problem which uses an interesting reduction to Node-Weighted Connected Dominating Set.
Abstract: In mobile ad-hoc networks (MANETs), the mobility of the nodes is a complicating factor that significantly affects the effectiveness and performance of the routing protocols. Our work builds upon recent results on the effect of node mobility on the performance of available routing strategies (i.e.~path based, using support) and proposes a protocol framework that exploits the usually different mobility rates of the nodes by adapting the routing strategy during execution. We introduce a metric for the relative mobility of the nodes, according to which the nodes are classified into mobility classes. These mobility classes determine, for any pair of an origin and destination, the routing technique that best corresponds to their mobility properties. Moreover, special care is taken for nodes remaining almost stationary or moving with high (relative) speeds. Our key design goal is to limit the necessary implementation changes required to incorporate existing routing protocols into our framework. We provide extensive evaluation of the proposed framework, using a well-known simulator (NS2). Our first findings demonstrate that the proposed framework improves, in certain cases, the performance of the existing routing protocols.
Abstract: In ad-hoc mobile networks (MANET), the mobility of the nodes is a complicating factor that significantly affects the effectiveness and performance of the routing protocols. Our work builds upon the recent results on the effect of node mobility on the performance of available routing strategies (i.e.~path based, using support) and proposes a protocol framework that exploits the usually different mobility rates of the nodes by adapting the routing strategy during execution. We introduce a metric for the relative mobility of the nodes, according to which the nodes are classified into mobility classes. These mobility classes determine, for any pair of origin and destination, the routing technique that best corresponds to their mobility properties. Moreover, special care is taken for nodes remaining almost stationary or moving with high (relative) speeds. Our key design goal is to limit the necessary implementation changes required to incorporate existing routing protocols in our framework. We provide extensive evaluation of the proposed framework, using a well-known simulator (NS2). Our first findings demonstrate that the proposed framework improves, in certain cases, the performance of the existing routing protocols.
Abstract: We propose a simple obstacle model to be used while simulating wireless sensor networks. To the best of our knowledge, this is the first time such an integrated and systematic obstacle model for these networks has been proposed. We define several types of obstacles that can be found inside the deployment area of a wireless sensor network and provide a categorization of these obstacles based on their nature (physical and communication obstacles, i.e. obstacles that have physical presence or are formed out of node distribution patterns, respectively), their shape and their change of nature over time. We extend a custom-made sensor network simulator (simDust) and conduct a number of simulations in order to study the effect of obstacles on the performance of some representative (in terms of their logic) data propagation protocols for wireless sensor networks. Our findings confirm that obstacle presence has a significant impact on protocol performance, and also that different obstacle shapes and sizes may affect each protocol in different ways. This provides an insight into how a routing protocol will perform in the presence of obstacles and highlights possible protocol shortcomings. Moreover, our results show that the effect of obstacles is not directly related to the density of a sensor network, and cannot be emulated only by changing the network density.
Abstract: We design and implement a multicost impairment-aware routing and wavelength assignment algorithm for online traffic. In transparent optical networks the quality of a transmission degrades due to physical layer impairments. To serve a connection, the proposed algorithm finds a path and a free wavelength (a lightpath) that has acceptable signal quality performance by estimating a quality of transmission measure, called the Q factor. We take into account channel utilization in the network, which changes as new connections are established or released, in order to calculate the noise variances that correspond to physical impairments on the links. These, along with the time invariant eye impairment penalties of all candidate network paths, form the inputs to the algorithm. The multicost algorithm finds a set of so-called non-dominated Q paths from the given source to the given destination. Various objective functions are then evaluated in order to choose the optimal lightpath to serve the connection. The proposed algorithm combines the strength of multicost optimization with low execution time, making it appropriate for serving online connections.
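As an illustration of the multicost step (a sketch under our own assumptions, not the authors' implementation), the non-dominated set can be obtained with a simple Pareto filter over per-path cost vectors, after which any objective function picks the lightpath to establish; the cost components used below (hops, noise variance, negated Q margin) are purely illustrative.

from typing import List, Tuple

Cost = Tuple[float, float, float]          # e.g. (hops, noise_variance, -q_factor)

def dominates(a: Cost, b: Cost) -> bool:
    """a dominates b if it is no worse in every component and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(paths: List[Cost]) -> List[Cost]:
    return [p for p in paths if not any(dominates(q, p) for q in paths if q != p)]

candidates = [(3, 0.10, -6.5), (4, 0.04, -7.2), (5, 0.12, -6.0)]
pareto = non_dominated(candidates)
# Final choice: evaluate an objective function over the Pareto set, e.g. best Q.
best = min(pareto, key=lambda c: c[2])
print(pareto, best)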
Abstract: We present a new dynamic graph structure specifically suited
for large-scale transportation networks that provides simultaneously three
unique features: compactness, agility and dynamicity. We demonstrate
its practicality and superiority by conducting an experimental study for
shortest route planning in large-scale European and US road networks
with a few dozen millions of nodes and edges. Our approach is the first
one that concerns the dynamic maintenance of a large-scale graph with
ordered elements using a contiguous memory part, and which allows an
arbitrary online reordering of its elements.
Abstract: Smart Dust is a special case of wireless sensor networks, comprised of a vast number of ultra-small fully autonomous computing, communication and sensing devices, with very restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Smart Dust can be very useful in practice, i.e. in the local detection of remote crucial events and the propagation of data reporting their realization to a control center.
In this paper, we propose a new energy efficient and fault tolerant protocol for data propagation in smart dust networks, the Variable Transmission Range Protocol (VTRP). The basic idea of data propagation in VTRP is the varying range of data transmissions, i.e. we allow the transmission range to increase in various ways. Thus, data propagation in our protocol exhibits high fault-tolerance (by bypassing obstacles or faulty sensors) and increases network lifetime (since critical sensors, i.e. close to the control center are not overused). As far as we know, it is the first time varying transmission range is used.
We implement the protocol and perform an extensive experimental evaluation and comparison to a representative protocol (LTP) of several important performance measures with a focus on energy consumption. Our findings indeed demonstrate that our protocol achieves significant improvements in energy efficiency and network lifetime.
Abstract: In this work we propose a new energy efficient and fault tolerant protocol for data propagation in wireless sensor networks, the Variable Transmission Range Protocol (VTRP). The basic idea of data propagation in VTRP is the varying range of data transmissions, i.e. we allow the transmission range to increase in various ways. Thus data propagation in our protocol exhibits high fault-tolerance (by bypassing obstacles or faulty sensors) and increases network lifetime (since critical sensors, i.e. those close to the control center, are not overused). As far as we know, it is the first time varying transmission range is used.
We implement the protocol and perform an extensive experimental evaluation and comparison to a representative protocol (LTP) of several important performance measures with a focus on energy consumption. Our findings indeed demonstrate that our protocol achieves significant improvements in energy efficiency and network lifetime.
Abstract: In this paper, we present a new hybrid optical burst switch architecture (HOBS) that takes advantage of the pre-transmission idle
time during lightpath establishment. In dynamic circuit switching (wavelength routing) networks, capacity is immediately hard-reserved
upon the arrival of a setup message at a node, but it is used at least a round-trip time delay later. This waste of resources
is significant in optical multi-gigabit networks and can be used to transmit traffic of a lower class of service in a non-competing
way. The proposed hybrid OBS architecture takes advantage of this idle time to transmit one-way optical bursts of a lower class of
service, while high priority data explicitly requests and establishes end-to-end lightpaths. In the proposed scheme, the two control
planes (two-way and one-way OBS reservation) are merged, in the sense that each SETUP message, used for the two-way lightpath
establishment, is associated with one-way burst transmission and therefore it is modified to carry routing and overhead information
for the one-way traffic as well. In this paper, we present the main architectural features of the proposed hybrid scheme and further
we assess its performance by conducting simulation experiments on the NSF net backbone topology. The extensive network study
revealed that the proposed hybrid architecture can achieve and sustain an adequate burst transmission rate with a finite worst case
delay.
Abstract: In this work we present the basic concepts in the architecture of a peer-to-peer environment for monitoring multiple wireless sensor networks, called ShareSense. ShareSense, which is currently under development, uses JXTA as a peer-to-peer substrate. We demonstrate its basic functionalities using a simple application scenario, which utilizes multiple disparate wireless sensor networks. This application scenario involves monitoring of such networks using a separate management environment and a custom application GUI, as well as using Google Earth as an additional user interface.
Abstract: In this work we present the architecture and implementation of WebDust, a software platform for managing multiple, heterogeneous (both in terms of software and hardware), geographically disparate sensor networks. We describe in detail the main concepts behind its design, and basic aspects of its implementation, including the services provided to end-users and developers. WebDust uses a peer-to-peer substrate, based on JXTA, in order to unify multiple sensor networks installed in various geographic areas. We aim at providing a software framework that will permit developers to deal with the new and critical aspects that networks of sensors and tiny devices bring into global computing, and to provide a coherent set of high level services, design rules and technical recommendations, in order to be able to develop the envisioned applications of global sensor networks. Furthermore, we give an overview of a deployed distributed testbed, consisting of a total of 56 nodes, and describe in more detail two specific testbed sites and the integration of the related software and hardware technologies used for its operation with our platform. Finally, we describe the design and implementation of an interface option provided to end-users, based on the popular Google Earth application.
Abstract: We propose a priority-based balanced routing scheme, called the priority STAR routing scheme, which leads to optimal throughput and average delay at the same time for random broadcasting and routing. In particular, the average reception delay for random broadcasting required in n_1 × n_2 × ... × n_d tori with n_i = O(1), n-ary d-cubes with n = O(1), or d-dimensional hypercubes is O(d + 1/(1-ρ)). We also study the case where multiple communication tasks for random 1-1 routing and/or random broadcasting are executed at the same time. When a constant fraction of the traffic is contributed by broadcast requests, the average delay for random 1-1 routing required in any d-dimensional hypercube, any n-ary d-cube with n = O(1), and most n_1 × n_2 × ... × n_d tori with n_i = O(1) is O(d) based on priority STAR. Our simulation results show that the priority-based balanced routing scheme considerably outperforms the best previous routing schemes for these networks.
Abstract: We study the problem of data propagation in sensor networks,
comprised of a large number of very small and low-cost nodes,
capable of sensing, communicating and computing. The distributed
co-operation of such nodes may lead to the accomplishment of large
sensing tasks, having useful applications in practice. We present
a new protocol for data propagation towards a control center
(``sink") that avoids flooding by probabilistically favoring
certain (``close to optimal") data transmissions.
This protocol is very simple to implement in sensor devices
and operates under total absence
of co-ordination between sensors. We consider a network model of randomly deployed sensors of sufficient density.
As shown by a geometry analysis,
the protocol is correct, since it always propagates data
to the sink, under ideal network conditions (no failures). Using
stochastic processes, we show that the protocol is very energy efficient. Also, when part of the network is inoperative, the
protocol manages to propagate data very close to the sink, thus in
this sense it is robust. We finally present and discuss
large-scale experimental findings validating the analytical
results.
Abstract: We study the problem of data propagation in sensor networks,
comprised of a large number of very small and low-cost nodes,
capable of sensing, communicating and computing. The distributed
co-operation of such nodes may lead to the accomplishment of large
sensing tasks, having useful applications in practice. We present a new protocol for data propagation towards a control center ("sink") that avoids flooding by probabilistically favoring certain ("close to optimal") data transmissions. Motivated by certain applications and also as a starting point for a rigorous analysis, we study here lattice-shaped sensor networks. We however show that this lattice shape emerges even in randomly deployed sensor networks of sufficient sensor density. Our work is inspired and builds upon the directed diffusion paradigm.
This protocol is very simple to implement in sensor devices, uses only local information and operates under total absence of co-ordination between sensors. We consider a network model of randomly deployed sensors of sufficient density. As shown by a geometry analysis, the protocol is correct, since it always propagates data to the sink, under ideal network conditions (no failures). Using stochastic processes, we show that the protocol is very energy efficient. Also, when part of the network is inoperative, the protocol manages to propagate data very close to the sink, thus in this sense it is robust. We finally present and discuss large-scale experimental findings validating the analytical results.
Abstract: We propose a MAC protocol for mobile ad hoc networks that
uses power control for the RTS/CTS and DATA frame
transmissions in order to improve energy and capacity
utilization efficiency. Unlike IEEE 802.11, in our scheme the
RTS frames are not sent using the maximum transmission
power to silence neighbouring nodes, and the CTS frames do
not silence all receiving nodes to the same degree. In contrast,
the transmission power of the RTS frames follows a slow
start principle, while the CTS frames, which are sent at
maximum transmission power, prevent the neighbouring
nodes from transmitting their DATA frames with power more
than a computed threshold, while allowing them to transmit at
power levels less than that threshold. This is done by
including in the RTS and the CTS frames additional
information, such as the power of the transmissions, and the
interference tolerance of the nodes. Moreover the DATA
frames are sent at the minimum required transmission power
increased by a small margin to ensure connectivity with the
intended receiver, so as to cause minimal interference to
neighbouring nodes and allow for future interference to be
added to the receiver of the DATA frames. The power to be
used by the transmitter is computed by the recipient of the
RTS frame and is included in the CTS frame. It is expected
that a network with such a power management scheme would
achieve a better throughput performance and more power
savings than a network without such a scheme.
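A rough Python sketch of the power computation implied above: the recipient of the RTS estimates the path loss from the advertised RTS transmit power, adds the minimum required receive power and a small margin, and returns the result in the CTS. The sensitivity value, the margin and the dB bookkeeping are illustrative assumptions, not figures from the paper.

RX_SENSITIVITY_DBM = -82.0   # assumed minimum power needed for correct reception
MARGIN_DB = 3.0              # small margin for connectivity / future interference

def path_loss_db(rts_tx_power_dbm: float, rts_rx_power_dbm: float) -> float:
    """Channel loss estimated from the RTS, whose transmit power is carried in the frame."""
    return rts_tx_power_dbm - rts_rx_power_dbm

def data_tx_power_dbm(rts_tx_power_dbm: float, rts_rx_power_dbm: float) -> float:
    """Minimum required DATA power plus a small margin, as carried back in the CTS."""
    loss = path_loss_db(rts_tx_power_dbm, rts_rx_power_dbm)
    return RX_SENSITIVITY_DBM + loss + MARGIN_DB

# Example: RTS sent at 10 dBm and heard at -60 dBm gives 70 dB loss, so DATA at -9 dBm.
print(data_tx_power_dbm(rts_tx_power_dbm=10.0, rts_rx_power_dbm=-60.0))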
Abstract: In this paper we present a platform for developing mobile, locative and collaborative distributed games comprised of small programmable object technologies (e.g., wireless sensor networks) and traditional networked processors.
The platform is implemented using a combination of Java
Standard and Micro Editions, also targeting mobile phones
that have some kind of sensors installed. We briefly present
the architecture of our platform and demonstrate its capabilities by reporting on two pervasive multiplayer games. The key
characteristic of these games is that players interact with each
other and their surrounding environment by moving, running
and gesturing as a means to perform game related actions, using small programmable object technologies.
Abstract: This paper presents an overview of Quality of Service (QoS) differentiation mechanisms proposed for Optical Burst Switching (OBS) networks. OBS has been proposed to couple the benefits of both circuit and packet switching for the “on demand” use of capacity in the future optical Internet. In such a case, QoS support imposes some important challenges before this technology is deployed. This paper takes a broader view on QoS, including QoS differentiation not only at the burst but also at the transport levels for OBS networks. A classification of existing QoS differentiation mechanisms for OBS is given and their efficiency and complexity are comparatively discussed. We provide numerical examples on how QoS differentiation with respect to burst loss rate and transport layer throughput can be achieved in OBS networks.
Abstract: Designing wireless sensor networks is inherently complex; many aspects such as energy efficiency, limited resources, decentralized collaboration and fault tolerance have to be tackled. To be effective and to produce applicable results, fundamental research has to be tested, at least as a proof-of-concept, in large scale environments, so as to assess the feasibility of the new concepts, verify their large scale effects (not only at the technological level, but also in terms of their foreseeable implications for users, society and the economy) and derive further requirements, orientations and inputs for the research. In this paper we focus on the problems of interconnecting existing testbed environments via the Internet and providing a virtual unifying laboratory that will support academia, research centers and industry in their research on networks and services. In such a facility, important issues of trust, security, confidentiality and integrity of data may arise, especially for both commercial and non-commercial organizations. In this paper we investigate such issues and present the design of a secure and robust architectural model for interconnecting testbeds of wireless sensor networks.
Abstract: In this paper, we propose a new scheme for the on-demand use of capacity in OBS networks, combining a two-way reservation protocol with an assembly process that incorporates an aggressive, forward resource reservation mechanism. The key idea is to tune the assembly timer to be equal to the time associated with the establishment of the end-to-end connection (round-trip time) and, thus, synchronize the resource reservation with the assembly process. In this way, upon the arrival of the first packet in the queue, reservation of resources may start simultaneously based on an aggressive prediction of the burst length.
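A minimal sketch of this synchronization, under simplifying assumptions: the assembly timer equals the round-trip time of the two-way reservation, and the burst length announced in the SETUP is an aggressive prediction made when the first packet arrives. The rate estimator and the over-provisioning factor below are illustrative, not taken from the paper.

def predicted_burst_bytes(recent_rate_bps: float, rtt_s: float,
                          overprovision: float = 1.2) -> int:
    """Aggressive (over-provisioned) estimate of the burst assembled within one RTT."""
    return int(recent_rate_bps * rtt_s / 8 * overprovision)

def on_first_packet(recent_rate_bps: float, rtt_s: float) -> dict:
    """The first packet triggers both the reservation and the assembly timer."""
    return {
        "setup_reserved_bytes": predicted_burst_bytes(recent_rate_bps, rtt_s),
        "assembly_timer_s": rtt_s,   # the burst leaves exactly when the path is ready
    }

print(on_first_packet(recent_rate_bps=1e9, rtt_s=0.005))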
Abstract: In this work, we present a concrete framework, based on web services-oriented architecture, for integrating small programmable objects in the web of things.
Functionality and data gathered by the Small Programmable Objects (SPO) are exposed using Web Services. Based on this, by exploiting XML encoding, SPO become comprehensible to any web application. The proposed architecture is focused on providing secure and efficient interoperability between SPO and the web.
Additionally, the proposed architecture provides management capabilities for deploying, maintaining and operating SPO applications across multiple networks.
We present the multilayer architecture of our system and its implementation, which uses a combination of Java Standard and Micro Editions. Finally, we present a case study based on our implementation. In this application we use SunSPOTs, which are wireless network motes developed by Sun Microsystems.
Abstract: We study the problem of fast and energy-efficient data collection of sensory data using a mobile sink, in wireless sensor networks in which both the sensors and the sink move. Motivated by relevant applications, we focus on dynamic sensory mobility and heterogeneous sensor placement. Our approach basically suggests to exploit the sensor motion to adaptively propagate information based on local conditions (such as high placement concentrations), so that the sink gradually “learns” the network and accordingly optimizes its motion. Compared to relevant solutions in the state of the art (such as the blind random walk, biased walks, and even optimized deterministic sink mobility), our method significantly reduces latency (the improvement ranges from 40% for uniform placements, to 800% for heterogeneous ones), while also improving the success rate and keeping the energy dissipation at very satisfactory levels.
Abstract: We investigate the problem of efficient data collection in wireless sensor networks where both the sensors and the sink move. We especially study the important, realistic case where the spatial distribution of sensors is non-uniform and their mobility is diverse and dynamic. The basic idea of our protocol is for the sink to benefit from the local information that sensors spread in the network as they move, in order to extract current local conditions and accordingly adjust its trajectory. Thus, sensory motion anyway present in the network serves as a low cost replacement of network information propagation. In particular, we investigate two variations of our method: a) the greedy motion of the sink towards the region of highest density each time and b) taking into account the aggregate density in wider network regions. An extensive comparative evaluation against relevant data collection methods (both randomized and optimized deterministic) demonstrates that our approach achieves significant performance gains, especially in non-uniform placements (but also in uniform ones). In fact, the greedy version of our approach is more suitable in networks where the concentration regions appear in a spatially balanced manner, while the aggregate scheme is more appropriate in networks where the concentration areas are geographically correlated.
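The two trajectory rules can be illustrated with a short Python sketch; the grid of regions, the density reports and the aggregation radius are assumptions made for the example, not the paper's exact formulation.

from typing import Dict, Tuple

Region = Tuple[int, int]   # grid cell indices

def greedy_next_region(densities: Dict[Region, float]) -> Region:
    """Variation (a): head for the single region of highest reported density."""
    return max(densities, key=densities.get)

def aggregate_next_region(densities: Dict[Region, float], radius: int = 1) -> Region:
    """Variation (b): score each region by the aggregate density of its wider area."""
    def area_score(r: Region) -> float:
        return sum(d for (x, y), d in densities.items()
                   if abs(x - r[0]) <= radius and abs(y - r[1]) <= radius)
    return max(densities, key=area_score)

reports = {(0, 0): 5.0, (0, 1): 7.0, (3, 3): 9.0, (3, 4): 1.0}
# The greedy rule chases the densest cell, the aggregate rule the densest wider area.
print(greedy_next_region(reports), aggregate_next_region(reports))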
Abstract: We consider the problem of planning a mixed line
rates (MLR) wavelength division multiplexing (WDM) transport
optical network. In such networks, different modulation formats
are usually employed to support the transmission at different line
rates. Previously proposed planning algorithms have used a
transmission reach limit for each modulation format/line rate,
mainly driven by single line rate systems. However, transmission
experiments in MLR networks have shown that physical layer
interference phenomena are more significant between
transmissions that utilize different modulation formats. Thus, the
transmission reach of a connection with a specific modulation
format/line rate depends also on the other connections that co-propagate
with it in the network. To plan a MLR WDM network,
we present routing and wavelength assignment (RWA)
algorithms that take into account the adaptation of the
transmission reach of each connection according to the use of the
modulation formats/line rates in the network. The proposed
algorithms are able to plan the network so as to alleviate
interference effects, enabling the establishment of connections of
acceptable quality over paths that would otherwise be prohibited.
Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves (not just the sinks) are mobile. Furthermore, we focus on mobility scenarios characterized by heterogeneous, highly changing mobility roles in the network. To capture these high dynamics of diverse sensory motion we propose a novel network parameter,
the mobility level, which, although simple and local, quite accurately takes into account both the spatial and speed characteristics of motion. We then propose adaptive data dissemination protocols that use the mobility level estimation to optimize performance, by basically exploiting high mobility (redundant message ferrying) as a cost-effective replacement of flooding, e.g. the sensors tend to dynamically propagate less data in the presence
of high mobility, while nodes of high mobility are favored for moving data around. These dissemination schemes are enhanced by a distance-sensitive probabilistic message flooding inhibition mechanism that further reduces communication cost, especially for fast nodes of high mobility level, and as distance to data destination decreases. Our simulation findings
demonstrate significant performance gains of our protocols compared to non-adaptive protocols, i.e. adaptation increases the success rate and reduces latency (even by 15%) while at the same time significantly reducing energy dissipation (in most cases by even 40%). Also, our adaptive schemes achieve significantly higher message delivery ratio and
satisfactory energy-latency trade-offs when compared to flooding when sensor nodes have
limited message queues.
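A hedged sketch of the adaptation rule described above: nodes forward fewer copies when their mobility level is high (motion itself ferries the data), and the flooding probability also drops as the message nears its destination. The exact functional forms below are illustrative assumptions, not the protocol's actual rules.

def mobility_level(speed: float, max_speed: float) -> float:
    """Local, simple estimate in [0, 1]; the paper's metric also uses spatial characteristics."""
    return min(speed / max_speed, 1.0)

def forward_probability(mob: float, dist_to_sink: float, network_diameter: float) -> float:
    """Inhibit flooding for fast nodes and for messages already close to the sink."""
    distance_factor = dist_to_sink / network_diameter        # smaller when close
    return max(0.05, (1.0 - mob) * distance_factor)

print(forward_probability(mob=0.9, dist_to_sink=50.0, network_diameter=500.0))   # fast, near
print(forward_probability(mob=0.1, dist_to_sink=400.0, network_diameter=500.0))  # slow, far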
Abstract: We introduce a new modelling assumption for wireless sensor networks, that of node redeployment (addition of sensor devices during protocol evolution) and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks and adaptation becomes a powerful technique for protocol design. Under these modelling assumptions, we design, implement and evaluate a new power conservation scheme for efficient data propagation. Our scheme is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards improved operation choices. The scheme is simple, distributed and does not require exchange of control messages between nodes.
Implementing our protocol in software we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of the network simulator ns-2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-offs between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation protocol) and good trade-offs achieved. Furthermore, the redeployment of additional sensors during network evolution and/or the heterogeneous deployment of sensors drastically improve (when compared to ``equal total power'' simultaneous deployment of identical sensors at the start) the protocol performance (i.e. the success rate increases up to four times while reducing energy dissipation and, interestingly, keeping latency low).
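As an illustration of the local adaptation loop (thresholds, step size and bounds are assumptions for the example, not values from the protocol): each node periodically looks at its own residual energy and observed neighbourhood density and nudges its duty cycle, without exchanging control messages.

def adjust_duty_cycle(duty: float, local_density: int, residual_energy: float,
                      dense_threshold: int = 8, low_energy: float = 0.2,
                      step: float = 0.05) -> float:
    """Return the new fraction of time the node stays awake, clamped to [0.1, 1]."""
    if local_density >= dense_threshold or residual_energy <= low_energy:
        duty -= step      # many neighbours can cover for us / save remaining energy
    else:
        duty += step      # sparse area or plenty of energy: stay awake more
    return min(1.0, max(0.1, duty))

print(adjust_duty_cycle(duty=0.5, local_density=12, residual_energy=0.9))
print(adjust_duty_cycle(duty=0.5, local_density=3, residual_energy=0.8))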
Abstract: Clustering is a crucial network design approach to enable large-scale wireless sensor network (WSN) deployments. A large variety of clustering approaches has been presented focusing on different performance metrics. Such protocols usually aim at minimizing communication overhead, evenly distributing roles among the participating nodes, as well as controlling the network topology. Simulations on such protocols are performed using theoretical models that are based on unrealistic assumptions like the unit disk graph communication model, ideal wireless communication channels and perfect energy consumption estimations. With these assumptions taken for granted, theoretical models claim various performance milestones that cannot be achieved in realistic conditions. In this paper, we design a new clustering protocol that adapts to the changes in the environment and the needs and goals of the user applications. We address the issues that hinder its performance due to the real environment conditions and provide a deployable protocol. The implementation, integration and experimentation of this new protocol and its optimizations were performed using the \textsf{WISEBED} framework. We apply our protocol in multiple indoor wireless sensor testbeds with multiple experimental scenarios to showcase scalability and trade-offs between network properties and configurable protocol parameters. By analysis of the real world experimental output, we present results that depict a more realistic view of the clustering problem, regarding adapting to environmental conditions and the quality of topology control. Our study clearly demonstrates the applicability of our approach and the benefits it offers to both research \& development communities.
Abstract: Wireless Sensor Networks are by nature highly dynamic and communication between sensors is completely ad hoc, especially when mobile devices are part of the setup. Numerous protocols and applications proposed for such networks
operate on the assumption that knowledge of the neighborhood is a priori available to all nodes. As a result, WSN deployments need to use or implement from scratch a neighborhood discovery mechanism. In this work we present a new protocol based on adaptive periodic beacon exchanges. We totally avoid continuous beaconing by adjusting the rate of broadcasts using the concept of consistency over the understanding of neighborhood that nearby devices share. We propose, implement and evaluate our adaptive neighborhood discovery protocol over our experimental testbed and using large scale simulations. Our results indicate that the
new protocol operates more efficiently than existing reference implementations while it provides valid information to applications that use it. Extensive performance evaluation indicates that it successfully reduces generated network traffic by 90% and increases network lifetime by 20% compared to existing mechanisms that rely on continuous beaconing.
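A minimal sketch of the beacon-rate adaptation described above: when a node and its neighbours keep agreeing on who the neighbours are (a consistent shared view), the beacon interval is backed off; a detected inconsistency resets it. The doubling policy and the interval bounds are illustrative assumptions.

MIN_INTERVAL_S = 1.0
MAX_INTERVAL_S = 64.0

def next_beacon_interval(current: float, neighborhood_consistent: bool) -> float:
    if neighborhood_consistent:
        return min(current * 2.0, MAX_INTERVAL_S)   # stable view: beacon less often
    return MIN_INTERVAL_S                            # change detected: probe quickly again

interval = MIN_INTERVAL_S
for consistent in [True, True, True, False, True]:
    interval = next_beacon_interval(interval, consistent)
    print(interval)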
Abstract: We study the problem of secure routing in wireless sensor networks where the sensors and the sink can move during the execution of remote monitoring applications and communication is not necessarily directed towards the sink. We present a new routing protocol that builds upon a collection of mechanisms so that the integrity and confidentiality of the information reported to the controlling authorities is secured. The mechanisms are simple to implement, rely only on local information and require O(1) storage per sensor. The protocol adapts to mobility and security challenges that may arise throughout the execution of the application. We take special care for wireless sensor networks that monitor dynamically changing environments and applications that require its operation for extended periods of time. APSR can detect when the current network conditions are about to change and becomes ready for adaption to the new conditions. We demonstrate how to deal with inside and outside attacks even when the network is adapting to internal and/or external events.
Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves
(not just the sinks) are mobile. Furthermore, we focus on mobility
scenarios characterized by heterogeneous, highly changing mobility
roles in the network.
To capture these high dynamics of diverse sensory motion
we propose a novel network parameter, the mobility level, which, although
simple and local, quite accurately takes into account both the
spatial and speed characteristics of motion. We then propose
adaptive data dissemination protocols that use the
mobility level estimation to optimize performance, by basically
exploiting high mobility (redundant message ferrying) as a cost-effective
replacement of flooding, e.g., the sensors tend to dynamically propagate
less data in the presence of high mobility, while nodes of high mobility
are favored for moving data around.
These dissemination schemes are enhanced by a distance-sensitive
probabilistic message flooding inhibition mechanism that
further reduces communication cost, especially for fast nodes
of high mobility level, and as distance to data destination
decreases. Our simulation findings demonstrate significant
performance gains of our protocols compared to non-adaptive
protocols, i.e., adaptation increases the success rate and reduces
latency (even by 15\%) while at the same time significantly
reducing energy dissipation (in most cases by even 40\%).
Also, our adaptive schemes achieve significantly
higher message delivery ratio and satisfactory energy-latency
trade-offs when compared to flooding when sensor nodes have limited message queues.
Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves
(not just the sinks) are mobile. We focus on mobility
scenarios characterized by heterogeneous, highly changing mobility
roles in the network.
To capture these high dynamics
we propose a novel network parameter, the mobility level, which, although
simple and local, quite accurately takes into account both the
spatial and speed characteristics of motion. We then propose
adaptive data dissemination protocols that use the
mobility level estimation to improve performance. By basically
exploiting high mobility (redundant message ferrying) as a cost-effective
replacement of flooding, e.g., the sensors tend to dynamically propagate
less data in the presence of high mobility, while nodes of high mobility
are favored for moving data around.
These dissemination schemes are enhanced by a distance-sensitive
probabilistic message flooding inhibition mechanism that
further reduces communication cost, especially for fast nodes
of high mobility level, and as distance to data destination
decreases. Our simulation findings demonstrate significant
performance gains of our protocols compared to non-adaptive
protocols, i.e., adaptation increases the success rate and reduces
latency (even by 15\%) while at the same time significantly
reducing energy dissipation (in most cases by even 40\%).
Also, our adaptive schemes achieve significantly
higher message delivery ratio and satisfactory energy-latency
trade-offs when compared to flooding when sensor nodes have limited message queues.
Abstract: In the last few years we have been witnessing a growing trend towards a stronger connection between the natural
and the digital domain; the digital domain steadily becomes a part of our everyday lives, while we monitor
natural processes and activities in greater detail with the aid of the digital domain. Wireless Sensor Networks
are a recent technological advancement that attempts to deal specifically with both trends.
Abstract: Data propagation in wireless sensor networks can be performed either by hop-by-hop single transmissions or by multi-path broadcast of data. Although several energy-aware MAC layer protocols exist that operate very well in the case of single point-to-point transmissions, none is especially designed and suitable for multiple broadcast transmissions. The key idea of our protocols is the passive monitoring of local network conditions and the adaptation of the protocol operation accordingly. The main contribution of our adaptive method is to proactively avoid collisions by implicitly and early enough sensing the need for collision avoidance. Using the above ideas, we design, implement and evaluate three different, new strategies for proactive adaptation. We show, through a detailed and extended simulation evaluation, that our parameter-based family of protocols for multi-path data propagation significantly reduce the number of collisions and thus increase the rate of successful message delivery (to above 90%) by achieving satisfactory trade-offs with the average propagation delay. At the same time, our protocols are shown to be very energy efficient, in terms of the average energy dissipation per delivered message.
Abstract: We consider sensor networks where the sensor nodes are attached on entities that move in a highly dynamic, heterogeneous manner. To capture this mobility diversity we introduce a new network parameter, the direction-aware mobility
level, which measures how fast and close each mobile node is expected to get to the data destination (the sink). We then provide local, distributed data dissemination protocols
that adaptively exploit the node mobility to improve performance. In particular, "high" mobility is used as a low cost replacement for data dissemination (due to the ferrying of data), while in the case of "low" mobility either a) data propagation redundancy is increased (when highly mobile neighbors exist) or b) long-distance data transmissions are used (when the entire neighborhood is of low mobility) to accelerate data dissemination towards the sink. An extensive performance comparison to relevant methods from
the state of the art demonstrates significant improvements, i.e. latency is reduced by even 4 times while keeping energy dissipation and delivery success at very satisfactory levels.
Abstract: We investigate the problem of efficient wireless energy recharging in Wireless Rechargeable Sensor Networks (WRSNs). In such networks a special mobile entity (called the Mobile Charger) traverses the network and wirelessly replenishes the energy of sensor nodes. In contrast to most current approaches, we envision methods that are distributed, adaptive and use limited network information. We propose three new, alternative protocols for efficient recharging, addressing key issues which we identify, most notably (i) to what extent each sensor should be recharged, (ii) what is the best split of the total energy between the charger and the sensors, and (iii) what are good trajectories the MC should follow. One of our protocols (LRP) performs some distributed, limited sampling of the network status, while another one (RTP) reactively adapts to energy shortage alerts judiciously spread in the network. As detailed simulations demonstrate, both protocols significantly outperform known state of the art methods, while their performance gets quite close to the performance of the global knowledge method (GKP) we also provide, especially in heterogeneous network deployments.
Abstract: We argue the case for a new paradigm for architecting structured P2P overlay networks, coined AESOP. AESOP consists of 3 layers: (i) an architecture, PLANES, that ensures significant performance speedups, assuming knowledge of altruistic peers; (ii) an accounting/auditing layer, AltSeAl, that identifies and validates altruistic peers; and (iii) SeAledPLANES, a layer that facilitates the coordination/collaboration of the previous two components. We briefly present these components along with experimental and analytical data of the promised significant performance gains and the related overhead. In light of these very encouraging results, we put this three-layer architecture paradigm forth as the way to structure the P2P overlay networks of the future.
Abstract: Wireless sensor networks are comprised of a vast number of
ultra-small autonomous computing, communication and sensing devices,
with restricted energy and computing capabilities, that co-operate
to accomplish a large sensing task. Such networks can be very useful
in practice, e.g.~in the local monitoring of ambient conditions and
reporting them to a control center. In this paper we propose a
distributed group key establishment protocol that uses mobile agents
(software) and is particularly suitable for energy constrained,
dynamically evolving ad-hoc networks. Our approach totally avoids
the construction and the maintenance of a distributed structure that
reflects the topology of the network. Moreover, it trades off
complex message exchanges for some amount of additional
local computations in order to be applicable to dense and dynamic
sensor networks. The extra computations are simple for the devices
to implement and are evenly distributed across the participants of
the network leading to good energy balance. We evaluate the
performance of our protocol in a simulated environment and compare
our results with existing group key establishment protocols. The
security of the protocol is based on the Diffie-Hellman problem and
we used in our experiments its elliptic curve analog. Our findings
basically indicate the feasibility of implementing our protocol in
real sensor network devices and highlight the advantages and
disadvantages of each approach given the available technology and
the corresponding efficiency (energy, time) criteria.
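For reference, the elliptic-curve Diffie-Hellman primitive on which the protocol's security rests can be exercised with the third-party Python cryptography package; the fragment below only shows a single pairwise exchange, not the group-key establishment or mobile-agent machinery of the protocol itself.

from cryptography.hazmat.primitives.asymmetric import ec

# Two nodes each generate an elliptic-curve key pair on the same curve.
node_a = ec.generate_private_key(ec.SECP256R1())
node_b = ec.generate_private_key(ec.SECP256R1())

# Each node combines its own private key with the other's public key;
# both arrive at the same shared secret, which could then seed a group key.
secret_a = node_a.exchange(ec.ECDH(), node_b.public_key())
secret_b = node_b.exchange(ec.ECDH(), node_a.public_key())
assert secret_a == secret_b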
Abstract: We investigate the problem of efficient data collection in wireless sensor networks where both the sensors and the sink move. We especially study the important, realistic case where the spatial distribution of sensors is non-uniform and their mobility is diverse and dynamic. The basic idea of our protocol is for the sink to benefit from the local information that sensors spread in the network as they move, in order to extract current local conditions and accordingly adjust its trajectory. Thus, sensory motion anyway present in the network serves as a low cost replacement of network information propagation. In particular, we investigate two variations of our method: a) the greedy motion of the sink towards the region of highest density each time and b) taking into account the aggregate density in wider network regions. An extensive comparative evaluation against relevant data collection methods (both randomized and optimized deterministic) demonstrates that our approach achieves significant performance gains, especially in non-uniform placements (but also in uniform ones). In fact, the greedy version of our approach is more suitable in networks where the concentration regions appear in a spatially balanced manner, while the aggregate scheme is more appropriate in networks where the concentration areas are geographically correlated. We also investigate the case of multiple sinks by suggesting appropriate distributed coordination methods.
Abstract: As a result of recent significant technological advances, a new computing and communication environment, Mobile Ad Hoc Networks (MANET), is about to enter the mainstream. A multitude of critical aspects, including mobility, severe limitations and limited reliability, create a new set of crucial issues and trade-offs that must be carefully taken into account in the design of robust and efficient algorithms for these environments. The communication among mobile hosts is one among the many issues that need to be resolved efficiently before MANET becomes a commodity.
In this paper, we propose to discuss the communication problem in MANET as well as present some characteristic techniques for the design, the analysis and the performance evaluation of distributed communication protocols for mobile ad hoc networks. More specifically, we propose to review two different design techniques. While the first type of protocols tries to create and maintain routing paths among the hosts, the second set of protocols uses a randomly moving subset of the hosts that acts as an intermediate pool for receiving and delivering messages. We discuss the main design choices for each approach, along with performance analysis of selected protocols.
Abstract: We discuss some new algorithmic and complexity issues in
coalitional and dynamic/evolutionary games, related to the
understanding of modern selfish and complex networks.
In particular: (a) We examine the achievement of equilibria via natural
distributed and greedy approaches in networks. (b) We present a model
of a coalitional game in order to capture the anarchy cost and complexity
of constructing equilibria in such situations. (c) We propose a stochastic
approach to some kinds of local interactions in networks, that can be
viewed also as extensions of the classical evolutionary game theoretic
setting.
Abstract: This paper reviews the work performed under the
European ESPRIT project DO_ALL (Digital OpticAL Logic
modules) spanning from advanced devices (semiconductor optical
amplifiers) to all-optical modules (laser sources and gates) and
from optical signal processing subsystems (packet clock recovery,
optical write/store memory, and linear feedback shift register) to
their integration in the application level for the demonstration of
nontrivial logic functionality (all-optical bit-error-rate tester and
a 2 × 2 exchange–bypass switch). The successful accomplishment
of the project's goals has opened the road for the implementation
of more complex ultra-high-speed all-optical signal processing
circuits that are key elements for the realization of all-optical
packet switching networks.
Abstract: Algorithms are presented for the all-pairs min-cut problem in bounded tree-width, planar, and sparse networks. The approach used is to preprocess the input n-vertex network so that afterward, the value of a min-cut between any two vertices can be efficiently computed. A tradeoff is shown between the preprocessing time and the time taken to compute min-cuts subsequently. In particular, after an O(n log n) preprocessing of a bounded tree-width network, it is possible to find the value of a min-cut between any two vertices in constant time. This implies that for such networks the all-pairs min-cut problem can be solved in time O(n^2). This algorithm is used in conjunction with a graph decomposition technique of Frederickson to obtain algorithms for sparse and planar networks. The running times depend upon a topological property, γ, of the input network. The parameter γ varies between 1 and Θ(n); the algorithms perform well when γ = o(n). The value of a min-cut can be found in time O(n + γ^2 log γ) and all-pairs min-cut can be solved in time O(n^2 + γ^4 log γ) for sparse networks. The corresponding running times for planar networks are O(n + γ log γ) and O(n^2 + γ^3 log γ), respectively. The latter bounds depend on a result of independent interest; outerplanar networks have small “mimicking” networks that are also outerplanar.
Abstract: In this paper, we consider the problem of energy balanced data propagation in wireless sensor networks and we generalise previous works by allowing realistic energy assignment. A new modelling of the process of energy consumption as a random walk, along with a new analysis, are proposed. Two new algorithms are presented and analysed. The first one is easy to implement and fast to execute. However, it needs a priori assumptions on the process generating data to be propagated. The second algorithm overcomes this need by inferring information from the observation of the process. Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes. This represents an important contribution for propagating energy balanced data in wireless sensor networks due to their highly dynamic nature.
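The abstract does not restate the propagation rule itself, so the sketch below only illustrates the kind of randomized per-hop decision that energy-balance schemes of this family rely on: a node either relays data one "ring" closer to the sink or, with some small probability, transmits directly to the sink; the probabilities and the distance-squared energy model are illustrative assumptions (estimating such probabilities online is what the adaptive algorithm above is about).

```python
import random

# Illustrative per-hop rule for energy-balanced propagation: a sensor in
# ring i (i hops from the sink) either relays to ring i-1 or sends directly
# to the sink with probability p[i]. The p[i] values are the knobs an
# adaptive/estimation-based algorithm would tune; here they are fixed.

def propagate(start_ring, p, energy, hop_cost=1.0, direct_cost=lambda i: i * i):
    ring = start_ring
    while ring > 0:
        if random.random() < p[ring]:          # direct ("long") transmission
            energy[ring] += direct_cost(ring)
            ring = 0
        else:                                  # hop-by-hop relay
            energy[ring] += hop_cost
            ring -= 1
    return energy

rings = 5
energy = [0.0] * (rings + 1)
p = [0.0, 0.0, 0.1, 0.1, 0.2, 0.2]             # assumed direct-send probabilities
for _ in range(1000):
    propagate(rings, p, energy)
print(energy[1:])                               # per-ring energy spent
```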
Abstract: In this paper we study the problem of basic communication
in ad-hoc mobile networks where the deployment area changes in a
highly dynamic way and is unknown. We call such networks
highly changing ad-hoc mobile networks.
For such networks we investigate an efficient communication protocol which extends
the idea (introduced in [WAE01,POMC01]) of exploiting the co-ordinated
motion of a small part of an ad-hoc mobile
network (the ``runners support'') to achieve
very fast communication between any two mobile users of the network.
The basic idea of the new protocol presented here is, instead
of using a fixed-size support for the whole duration of the protocol,
to employ a support of some initial (small) size which
adapts (within some time, which can be made short enough) to the
actual levels of traffic and the
(unknown and possibly rapidly changing) network area by
changing its size in order to converge to an optimal size,
thus satisfying certain Quality of Service criteria.
We provide here some proofs of correctness and fault tolerance
of this adaptive approach and we also provide analytical results
using Markov Chains and random walk techniques to show that such
an adaptive approach is, for this class of ad-hoc mobile networks, significantly more efficient than a simple non-adaptive
implementation of the basic ``runners support'' idea.
Abstract: We introduce a new modelling assumption in wireless sensor networks, that of node redeployment (addition of sensor devices during the protocol evolution) and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks and adaptation becomes a powerful technique for protocol design. Under this model, we design, implement and evaluate a power conservation scheme for efficient data propagation. Our protocol is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards best operation choices. Our protocol does not require the exchange of control messages between nodes to coordinate. Implementing our protocol, we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of Ns2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-off between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation paradigm) and good trade-offs. Furthermore, redeployment of sensors during network evolution and/or heterogeneous deployment of sensors drastically improve (when compared to equal total "power" simultaneous deployment of identical sensors at the start) the protocol performance (the success rate increases up to four times while reducing energy dissipation and, interestingly, keeping latency low).
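As a hypothetical illustration of the local adaptation described above, the snippet below recomputes a node's awake probability from its observed neighbour count and residual energy; the specific formula is an assumption, not the protocol's actual rule.

```python
# Hypothetical local sleep/awake adaptation: denser neighbourhoods and lower
# residual energy push a node towards longer sleep. The exact formula below
# is an illustrative assumption, not the protocol's actual rule.

def awake_probability(neighbours, target_awake_neighbours, energy, energy_max):
    density_term = min(1.0, target_awake_neighbours / max(1, neighbours))
    energy_term = energy / energy_max
    return max(0.1, min(1.0, density_term * (0.5 + 0.5 * energy_term)))

print(awake_probability(neighbours=20, target_awake_neighbours=4,
                        energy=30.0, energy_max=100.0))
```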
Abstract: We introduce a new model of
ad-hoc mobile networks, which we call hierarchical,
that are comprised of dense subnetworks of mobile
users (corresponding to highly populated
geographical areas, such as cities),
interconnected across access ports
by sparse but frequently used connections
(such as highways).
For such networks, we present
an efficient routing protocol which extends
the idea (introduced in WAE00) of exploiting the co-ordinated
motion of a small part of an ad-hoc mobile
network (the ``support'') to achieve
very fast communication between any two mobile users of the network.
The basic idea of the new protocol presented here is, instead
of using a unique (large) support for the whole network,
to employ a hierarchy of (small) supports (one for each city)
and also take advantage of the regular traffic
of mobile users across the interconnection highways to communicate
between cities.
We combine here theoretical analysis (average case estimations based on random walk properties) and experimental implementations (carried out using the LEDA platform) to claim and validate results showing that such a hierarchical routing approach is,
for this class of ad-hoc mobile networks, significantly more efficient than a simple extension of the
basic ``support'' idea presented in WAE00.
Abstract: Wireless Sensor Networks are complex systems consisting of a number of relatively simple autonomous sensing devices spread on a geographical area. The peculiarity of these devices lies in the constraints they face in relation to their energy reserves and their computational, storage and communication capabilities. The utility of these sensors is to measure certain environmental conditions and to detect critical events in relation to these measurements. Those events thereupon have to be reported to a specific central station, namely the “sink”. This data propagation generally has the form of a hop-by-hop transmission. In this framework we work on distributed data propagation protocols which take into account the energy reserves of the sensors. In particular, following the work of Chatzigiannakis et al. on the Probabilistic Forwarding Protocol (PFR), we present the distributed probabilistic protocol EFPFR, which favors transmission from the less depleted sensors in addition to favoring transmissions close to the “optimal line”. This protocol is simple and relies only on local information for propagation decisions. Its main goal is to limit the total amount of energy dissipated per event and therefore to extend the network’s operation duration.
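For concreteness, the sketch below shows an energy-aware probabilistic forwarding decision in the spirit of PFR/EFPFR: the geometric term forwards with probability proportional to the angle formed by the source, the candidate node and the sink (so nodes near the source-sink line forward almost surely), and it is scaled down for depleted nodes; the multiplicative combination of the two terms is an illustrative assumption, not necessarily EFPFR's exact rule.

```python
import math

# Sketch of an energy-aware probabilistic forwarding decision in the spirit
# of PFR/EFPFR: nodes close to the source-sink line (angle phi close to pi)
# forward with high probability, scaled down when their battery is depleted.
# The multiplicative combination is an illustrative assumption.

def forwarding_probability(node, source, sink, energy, energy_max):
    a = math.dist(node, source)
    b = math.dist(node, sink)
    c = math.dist(source, sink)
    if a == 0 or b == 0:
        return energy / energy_max            # degenerate geometry: always try
    # phi = angle source-node-sink (law of cosines); phi == pi on the line.
    cos_phi = max(-1.0, min(1.0, (a * a + b * b - c * c) / (2 * a * b)))
    p_geometry = math.acos(cos_phi) / math.pi
    return p_geometry * (energy / energy_max)

print(forwarding_probability((1.0, 0.2), (0.0, 0.0), (3.0, 0.0), 70.0, 100.0))
```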
Abstract: We investigate basic communication protocols in ad-hoc mobile networks. We follow the semi-compulsory approach according to which a small part of the mobile users, the support, which moves in a predetermined way, is used as an intermediate pool for receiving and delivering messages. Under this approach, we present a new semi-compulsory protocol called the runners in which the members of the support perform concurrent and continuous random walks and exchange any information given to them by senders when they meet. We also conduct a comparative experimental study of the runners protocol with another existing semi-compulsory protocol, called the snake, in which the members of the support move in a coordinated way and always remain pairwise adjacent. The experimental evaluation has been carried out in a new generic framework that we developed to implement protocols for mobile computing. Our experiments showed that for both protocols only a small support is required for efficient communication, and that the runners protocol outperforms the snake protocol in almost all types of inputs we considered.
Abstract: The “small world” phenomenon, i.e., the fact that the
global social network is strongly connected in the sense
that every two persons are inter-related through a small
chain of friends, has attracted research attention and has
been strongly related to the results of the experiments of the social
psychologist Stanley Milgram; properties
of social networks and relevant problems also emerge in
peer-to-peer systems and their study can shed light on
important modern network design properties.
In this paper, we have experimentally studied greedy
routing algorithms, i.e., algorithms that route information
using “long-range” connections that function as
shortcuts connecting “distant” network nodes. In
particular, we have implemented greedy routing
algorithms, and techniques from the recent literature in
networks of line and grid topology using parallelization
for increasing efficiency. To the best of our knowledge, no
similar attempt has been made so far.
Abstract: In this paper we present a new approximation algorithm for
the Minimum Energy Broadcast Routing (MEBR) problem in ad hoc
wireless networks that has exponentially better approximation factor
than the well-known Minimum Spanning Tree (MST) heuristic. Namely,
for any instance where a minimum spanning tree of the set of stations
is guaranteed to cost at most ρ times the cost of an optimal solution
for MEBR, we prove that our algorithm achieves an approximation ratio
bounded by 2 ln ρ − 2 ln 2 + 2. This result is particularly relevant for
its consequences on Euclidean instances, where we significantly improve
previous results.
Abstract: A vertical perspective, ranging from management
and routing to physical layer options, concerning dynamic
network monitoring and compensation of impairments
(M&C), is given. Feasibility, reliability, and performance
improvements on reconfigurable transparent networks are
expected to arise from the consolidated assessment of network
management and control specifications, as well as from a more
accurate evaluation of available M&C techniques. In the network
layer, physical-parameter-aware algorithms are foreseen to
pursue reliable network performance. In the physical layer,
some new M&C methods were developed, and a rating of the
state of the art reported in the literature is given. Optical
monitoring implementation and viability are discussed.
Abstract: We present the conceptual basis and the initial planning for an open source management architecture for wireless sensor networks (WSN). Although there is an abundance of open source tools serving the administrative needs of WSN deployments, there is a lack of tools or platforms for high level integrated WSN management. This is because of a variety of factors, including the lack of open source management tools, the immaturity of tools that offer manageability for WSNs, the limited high level management capabilities of sensor devices and architectures, and the lack of standardization. The current work is, to our knowledge, the first effort to conceptualize, formalize and design a remote, integrated management platform for the support of WSN research laboratories. The platform is based on the integration and extension of two innovative platforms: jWebDust, a WSN operation and management platform, and OpenRSM, an open source integrated remote systems and network management platform. The proposed system architecture can support several levels of integration (infrastructure management, functionality integration, firmware management), corresponding to different use-cases and application settings.
Abstract: Urban road networks are represented as directed graphs, accompanied by a metric which assigns cost functions (rather than scalars) to the arcs, e.g. representing time-dependent arc-traversal-times. In this work, we present oracles for providing time-dependent min-cost route plans, and conduct their experimental evaluation on a real-world data set (city of Berlin). Our oracles are based on precomputing all landmark-to-vertex shortest travel-time functions, for properly selected landmark sets. The core of this preprocessing phase is based on a novel, quite efficient and simple one-to-all approximation method for creating approximations of shortest travel-time functions. We then propose three query algorithms, including a PTAS, to efficiently provide min-cost route plan responses to arbitrary queries. Apart from the purely algorithmic challenges, we deal also with several
implementation details concerning the digestion of raw traffic data, and we provide heuristic improvements of both the preprocessing phase and the query algorithms. We conduct an extensive, comparative experimental study with all query algorithms and six landmark sets. Our results are quite encouraging, achieving remarkable speedups (at least by two orders of magnitude) and quite small approximation guarantees, over the time-dependent variant of Dijkstra's algorithm.
Abstract: We study the problem of greedy, single path data propagation
in wireless sensor networks, aiming mainly to minimize
the energy dissipation. In particular, we first mathematically
analyze and experimentally evaluate the energy efficiency
and latency of three characteristic protocols, each one
selecting the next hop node with respect to a different
criterion (minimum projection, minimum angle and minimum
distance to the destination). Our analytic and simulation
findings suggest that any single criterion does not simultaneously
satisfy both energy efficiency and low latency. Towards
parameterized energy-latency trade-offs we provide as well
hybrid combinations of the two criteria (direction and
proximity to the sink). Our hybrid protocols achieve significant
performance gains and allow fine-tuning of desired performance.
Also, they have nice energy balance properties, and can prolong
the network lifetime.
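To make the three criteria concrete, the sketch below computes, for a candidate next hop, its progress (projection) along the line towards the sink, its angular deviation from that line and its remaining distance to the sink, and then picks the neighbour minimizing a convex combination of direction and proximity; the weighting and the exact definition of the projection term are illustrative assumptions.

```python
import math

# Sketch of the three next-hop criteria discussed above and a simple hybrid.
# 'projection' is the candidate's progress along the line towards the sink,
# 'angle' is its deviation from that line, 'distance' its remaining distance
# to the sink. The convex combination in the hybrid is an illustrative assumption.

def scores(current, candidate, sink):
    to_sink = (sink[0] - current[0], sink[1] - current[1])
    to_cand = (candidate[0] - current[0], candidate[1] - current[1])
    norm = math.hypot(*to_sink)
    projection = (to_cand[0] * to_sink[0] + to_cand[1] * to_sink[1]) / norm
    angle = math.atan2(abs(to_cand[0] * to_sink[1] - to_cand[1] * to_sink[0]),
                       to_cand[0] * to_sink[0] + to_cand[1] * to_sink[1])
    distance = math.dist(candidate, sink)
    return projection, angle, distance

def hybrid_next_hop(current, neighbours, sink, w_direction=0.5):
    def cost(cand):
        _proj, angle, distance = scores(current, cand, sink)
        # Favour small deviation (direction) and small remaining distance (proximity).
        return w_direction * angle + (1.0 - w_direction) * distance / math.dist(current, sink)
    return min(neighbours, key=cost)

print(hybrid_next_hop((0, 0), [(1, 1), (2, 0.5), (1, -2)], (5, 0)))
```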
Abstract: We introduce a new model of ad-hoc mobile networks,
which we call hierarchical, that are comprised of
dense subnetworks of mobile users (corresponding to highly
populated geographical areas), interconnected across access
ports by sparse but frequently used connections.
To implement communication in such a case, a possible
solution would be to install a very fast (yet limited) backbone
interconnecting such highly populated mobile user areas, while
employing a hierarchy of (small) supports (one for each lower level
site). This fast backbone provides a limited number of access
ports within these dense areas of mobile users.
We use here theoretical analysis (average case estimations based on
random walk properties) to claim and validate
results showing that such a hierarchical routing approach is,
for this class of ad-hoc mobile networks, significantly
more efficient than a simple extension of the
basic ``support'' idea presented in [WAE00,DISC01].
Abstract: Motivated by the wavelength assignment problem in WDM optical networks, we study path coloring problems in graphs. Given a set of paths P on a graph G, the path coloring problem is to color the paths of P so that no two paths traversing the same edge of G are assigned the same color and the total number of colors used is minimized. The problem has been proved to be NP-hard even for trees and rings.
Using optimal solutions to fractional path coloring, a natural relaxation of path coloring, on which we apply a randomized rounding technique combined with existing coloring algorithms, we obtain new upper bounds on the minimum number of colors sufficient to color any set of paths on any graph. The upper bounds are either existential or constructive.
The existential upper bounds significantly improve existing ones provided that the cost of the optimal fractional path coloring is sufficiently large and the dilation of the set of paths is small. Our algorithmic results include improved approximation algorithms for path coloring in rings and in bidirected trees. Our results extend to variations of the original path coloring problem arising in multifiber WDM optical networks.
Abstract: The study of the path coloring problem is motivated by the allocation of optical bandwidth to communication requests in all-optical networks that utilize Wavelength Division Multiplexing (WDM). WDM technology establishes communication between pairs of network nodes by establishing transmitter-receiver paths and assigning wavelengths to each path so that no two paths going through the same fiber link use the same wavelength. Optical bandwidth is the number of distinct wavelengths. Since state-of-the-art technology allows for a limited number of wavelengths, the engineering problem to be solved is to establish communication minimizing the total number of wavelengths used. This is known as the wavelength routing problem. In the case where the underlying network is a tree, it is equivalent to the path coloring problem.
We survey recent advances on the path coloring problem in both undirected and bidirected trees. We present hardness results and lower bounds for the general problem covering also the special case of sets of symmetric paths (corresponding to the important case of symmetric communication). We give an overview of the main ideas of deterministic greedy algorithms and point out their limitations. For bidirected trees, we present recent results about the use of randomization for path coloring and outline approximation algorithms that find path colorings by exploiting fractional path colorings. Also, we discuss upper and lower bounds on the performance of on-line algorithms.
Abstract: A new model for intrusion and its propagation through various attack
schemes in networks is considered. The model is characterized by the number of
network nodes n, and two parameters f and g. Parameter f represents the probability
of failure of an attack to a node and is a gross measure of the level of security of
the attacked system and perhaps of the intruder's skills; g represents a limit on
the number of attacks that the intrusion software can ever try, due to the danger
of being discovered, when it issues them from a particular (broken) network node.
The success of the attack scheme is characterized by two factors: the number of
nodes captured (the spread factor) and the number of virtual links that a defense
mechanism has to trace from any node where the attack is active to the origin of
the intrusion (the traceability factor). The goal of an intruder is to maximize both
factors. In our model we present four different ways (attack schemes) by which an
intruder can organize his attacks. Using analytic and experimental methods, we first
show that for any 0 < f < 1, there exists a constant g for which any of our attack
schemes can achieve a Θ(n) spread and traceability factor with high probability,
given sufficient propagation time. We also show for three of our attack schemes
that the spread and the traceability factors are, with high probability, linearly related
during the whole duration of the attack propagation. This implies that it will not be
easy for a detection mechanism to trace the origin of the intrusion, since it will have
to trace a number of links proportional to the nodes captured.
Abstract: Collecting sensory data using a mobile data sink has
been shown to drastically reduce energy consumption at the cost of increasing delivery delay. Towards improved energy-latency trade-offs, we propose a biased, adaptive sink mobility scheme, that adjusts to local network conditions, such as the surrounding density, remaining energy and the number of past visits in each network region. The sink moves probabilistically, favoring less visited areas in order to cover the network area faster, while adaptively stopping more time in network regions that tend to produce more data. We implement and evaluate our mobility scheme via simulation in diverse network settings. Compared to known blind random, non-adaptive schemes, our method achieves
significantly reduced latency, especially in networks with nonuniform sensor distribution, without compromising the energy efficiency and delivery success.
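A minimal sketch of the biased, adaptive walk idea, assuming the area is divided into regions: the next region is drawn with probability decreasing in the number of past visits, and the sojourn time grows with the region's recent data production; both bias functions are assumptions used only for illustration.

```python
import random

# Illustrative sketch of the biased, adaptive sink walk: the next region is
# drawn with probability decreasing in the number of past visits, and the
# sink stays longer where more data has recently been produced. Both bias
# functions are assumptions used only for illustration.

def choose_next_region(neighbours, visits):
    weights = [1.0 / (1 + visits.get(r, 0)) for r in neighbours]
    return random.choices(neighbours, weights=weights, k=1)[0]

def sojourn_time(region, data_rate, base=1.0, scale=2.0):
    return base + scale * data_rate.get(region, 0.0)

visits = {"A": 5, "B": 1, "C": 0}
data_rate = {"B": 0.3, "C": 1.2}
nxt = choose_next_region(["A", "B", "C"], visits)
print(nxt, sojourn_time(nxt, data_rate))
```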
Abstract: We examine multi-player pervasive games that rely on the
use of ad-hoc mobile sensor networks. The unique feature in
such games is that players interact with each other and their
surrounding environment by using movement and presence
as a means of performing game-related actions, utilizing sensor
devices. We briefly discuss the fundamental issues and
challenges related to these types of games and the scenarios
associated with them. We have also developed a framework,
called Fun in Numbers (FinN), that handles a number of these
issues, such as neighbor discovery, localization, synchronization
and delay-tolerant communication. FinN is developed using Java
and is based on a multilayer architecture, which provides
developers with a set of templates and services for building
and operating new games.
Abstract: We study the fundamental naming and counting problems in networks that are anonymous, unknown, and possibly dynamic. Network dynamicity is modeled by the 1-interval connectivity model [KLO10]. We first prove that on static networks with broadcast counting is impossible to solve without a leader and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. With a leader we solve counting in linear time. Then we focus on dynamic networks with broadcast. We show that if nodes know an upper bound on the maximum degree that will ever appear then they can obtain an upper bound on n. Finally, we replace broadcast with one-to-each, in which a node may send a different message to each of its neighbors. This variation is then proved to be computationally equivalent to a full-knowledge model with unique names.
Abstract: We propose and evaluate fast reservation (FR)
protocols for Optical Burst Switched (OBS) networks. The
proposed reservation schemes aim at reducing the end-to-end
delay of a data burst, by sending the Burst Header Packet (BHP)
in the core network before the burst assembly is completed at the
ingress node. We use linear prediction filters to estimate the
expected length of the burst and the time needed for the
burstification process to complete. A BHP packet carrying these
estimates is sent before burst completion, in order to reserve
bandwidth at each intermediate node for the time interval the
burst is expected to pass from that node. Reducing the total time
needed for a packet to be transported over an OBS network is
important, especially for real-time applications. Reserving
bandwidth only for the time interval it is actually going to be used
by a burst is important for network utilization efficiency. In the
simulations conducted we evaluate the proposed extensions and
prove their usefulness.
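As a sketch of the prediction step, the code below trains a small linear (normalized-LMS) filter online on past burst lengths; the tap count, step size and synthetic trace are assumptions, and the FR schemes above only need some such early estimate of the burst length and assembly time so that the BHP can be sent ahead of burst completion.

```python
# Sketch of an online linear predictor for the expected burst length, using
# a small tapped filter with a normalized-LMS update. Tap count, step size
# and the synthetic burst trace are illustrative assumptions.

def predict(history, weights):
    return sum(w * x for w, x in zip(weights, history))

def nlms_update(history, weights, actual, mu=0.5):
    error = actual - predict(history, weights)
    power = sum(x * x for x in history) + 1e-9
    return [w + mu * error * x / power for w, x in zip(weights, history)], error

taps = 4
weights = [1.0 / taps] * taps                    # start as a moving average
history = [1000.0] * taps                        # last few burst lengths (bytes)
for actual in [1100, 950, 1200, 1050, 990, 1150]:   # assumed observed bursts
    estimate = predict(history, weights)
    weights, err = nlms_update(history, weights, actual)
    history = history[1:] + [float(actual)]
    print(f"estimate={estimate:7.1f}  actual={actual:5d}  error={err:+7.1f}")
```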
Abstract: In this work, we study the propagation of influence and computation in dynamic networks that are possibly disconnected at every instant. We focus on a synchronous message passing communication model with broadcast and bidirectional links. To allow for bounded end-to-end communication we propose a set of minimal temporal connectivity conditions that bound from above the time it takes for information to make progress in the network. We show that even in dynamic networks that are disconnected at every instant information may spread as fast as in networks that are connected at every instant. Further, we investigate termination criteria when the nodes know some upper bound on each of the temporal connectivity conditions. We exploit our termination criteria to provide efficient protocols (optimal in some cases) that solve the fundamental counting and all-to-all token dissemination (or gossip) problems. Finally, we show that any protocol that is correct in instantaneous connectivity networks can be adapted to work in temporally connected networks.
Abstract: In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.
Abstract: We study the problem of energy-balanced data propagation in wireless sensor networks. The energy balance property is crucial for maximizing the time the network is functional, by avoiding early energy depletion of a large portion of sensors. We propose a distributed, adaptive data propagation algorithm that exploits limited, local network density information for achieving energy-balance while at the same time
minimizing energy dissipation.
We investigate both uniform and heterogeneous sensor placement distributions. By a detailed experimental evaluation and comparison with well-known energy-balanced protocols, we show that our density-based protocol improves energy efficiency significantly while also having better energy balance properties.
Furthermore, we compare the performance of our protocol with a centralized, off-line optimum solution derived by a linear program which maximizes the network lifetime and show that it achieves near-optimal performance for uniform sensor deployments.
Abstract: One problem that frequently arises is the establishment of a
secure connection between two network nodes. There are many key
establishment protocols that are based on Trusted Third Parties or
public key cryptography which are in use today. However, in the
case of networks with frequently changing topology and size
composed of nodes of limited computation power, such as the ad-hoc
and sensor networks, such an approach is difficult to apply. One
way of attacking this problem for such networks is to have the two
nodes share some piece of information that will, subsequently,
enable them to transform this information into a shared
communication key.
Having each pair of network nodes share some piece of information
is, often, achieved through appropriate {\em key pre-distribution}
schemes. These schemes work by equipping each network node with a
set of candidate key values, some of which are shared with other
network nodes possessing other key sets. Later, when two nodes
meet, they can employ a suitable key establishment protocol in
order to locate shared values and use them for the creation of
the communication key.
In this paper we give a formal definition of collusion resistant
key predistribution schemes and then propose such a scheme based
on probabilistically created set systems. The resulting key sets
are shown to have a number of desirable properties that ensure the
confidentiality of communication sessions against collusion
attacks by other network nodes.
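The snippet below sketches only the generic mechanics described in the previous paragraph, not the probabilistically created set systems proposed in the paper: each node is preloaded with a random subset of a global key pool, and two meeting nodes hash the identifiers they share into a session key; pool size, key-ring size and the KDF are illustrative assumptions.

```python
import hashlib
import random

# Generic key-predistribution mechanics (not the paper's specific set
# system): every node holds a random subset of a global key pool; two
# meeting nodes derive a session key from the keys they happen to share.
# Pool size, key-ring size and the KDF are illustrative assumptions.

POOL = {i: hashlib.sha256(b"pool-key-%d" % i).digest() for i in range(1000)}
RING_SIZE = 80

def preload():
    return set(random.sample(sorted(POOL), RING_SIZE))   # key identifiers only

def session_key(ring_a, ring_b):
    shared = sorted(ring_a & ring_b)
    if not shared:
        return None                      # would require a path-key setup instead
    material = b"".join(POOL[i] for i in shared)
    return hashlib.sha256(material).hexdigest()

a, b = preload(), preload()
print(len(a & b), session_key(a, b))
```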
Abstract: An intersection graph of n vertices assumes that each vertex is equipped with a subset of a global label set. Two vertices share an edge
when their label sets intersect. Random Intersection Graphs (RIGs) (as defined in [18, 31]) consider label sets formed by the following experiment:
each vertex, independently and uniformly, examines all the labels (m in total) one by one. Each examination is independent and the vertex
succeeds in putting the label in its set with probability p. Such graphs nicely capture interactions in networks due to sharing of resources among nodes.
We study here the problem of efficiently coloring (and of finding upper bounds on the chromatic number) of RIGs. We concentrate on a range
of parameters not examined in the literature, namely: (a) m = n^α for α less than 1 (in this range, RIGs differ substantially from the Erdős–Rényi random graphs) and (b) the selection probability p is quite high
(e.g. at least ln^2 n / m in our algorithm) and disallows direct greedy colouring methods.
We manage to get the following results:
- For the case mp ≤ β ln n, for any constant β < 1 − α, we prove that np colours are enough to colour most of the vertices of the graph with high probability (whp). This means that even for quite dense
graphs, using the same number of colours as those needed to properly colour the clique induced by any label suffices to colour almost all of the vertices of the graph. Note also that this range of values of m, p
is quite wider than the one studied in [4].
- We propose and analyze an algorithm CliqueColour for finding a proper colouring of a random instance of G_{n,m,p}, for any mp ≥ ln^2 n. The algorithm uses information of the label sets assigned to the
vertices of G_{n,m,p} and runs in O(n^2 mp^2 / ln n) time, which is polynomial in n and m. We also show, by a reduction to the uniform random
intersection graphs model, that the number of colours required by the algorithm is of the correct order of magnitude with respect to the actual
chromatic number of G_{n,m,p}.
- We finally compare the problem of finding a proper colouring for G_{n,m,p} to that of colouring hypergraphs so that no edge is monochromatic. We show how one can find in polynomial time a k-colouring of the vertices of G_{n,m,p}, for any integer k, such that no clique induced by only one label in G_{n,m,p} is monochromatic. Our techniques are novel and try to exploit as much as possible the hidden structure of random intersection graphs in this interesting range.
Abstract: We investigate random intersection graphs, a combinatorial model that quite accurately abstracts distributed networks with local interactions between nodes blindly sharing critical resources from a limited globally available domain. We study important combinatorial properties (independence and hamiltonicity) of such graphs. These properties relate crucially to algorithmic design for important problems (like secure communication and frequency assignment) in distributed networks characterized by dense, local interactions and resource limitations, such as sensor networks. In particular, we prove that, interestingly, a small constant number of random, resource selections suffices to make the graph hamiltonian and we provide tight evaluations of the independence number of these graphs.
Abstract: This chapter aims at presenting certain important aspects of the design of lightweight, event-driven algorithmic solutions for data dissemination in wireless sensor networks that provide support for reliable, efficient and concurrency-intensive operation. We wish to emphasize that efficient solutions at several levels are needed, e.g.~higher level energy efficient routing protocols and lower level power management schemes. Furthermore, it is important to combine such different level methods into integrated protocols and approaches. Such solutions must be simple, distributed and local. Two useful algorithmic design principles are randomization (to trade-off efficiency and fault-tolerance) and adaptation (to adjust to high network dynamics towards improved operation). In particular, we provide a) a brief description of the technical specifications of state-of-the-art sensor devices, b) a discussion of possible models used to abstract such networks, emphasizing heterogeneity, c) some representative power management schemes, and d) a presentation of some characteristic protocols for data propagation. Crucial efficiency properties of these schemes and protocols (and their combinations, in some cases) are investigated by both rigorous analysis and performance evaluations through large scale simulations.
Abstract: The problem of communication among mobile nodes is one of the most fundamental problems in ad hoc mobile networks and is at the core of many algorithms, such as for counting the number of nodes, electing a leader, data processing etc. For an exposition of several important problems in ad hoc mobile networks. The work of Chatzigiannakis, Nikoletseas and Spirakis focuses on wireless mobile networks that are subject to highly dynamic structural changes created by mobility, channel fluctuations and device failures. These changes affect topological connectivity, occur with high frequency and may not be predictable in advance. Therefore, the environment where the nodes move (in three-dimensional space with possible obstacles) as well as the motion that the nodes perform are \textit{input} to any distributed algorithm.
Abstract: In this work, we overview some results concerning communication combinatorial properties in random intersection graphs and uniform random intersection graphs. These properties relate crucially to algorithmic design for important problems (like secure communication and frequency assignment) in distributed networks characterized by dense, local interactions and resource limitations, such as sensor networks. In particular, we present and discuss results concerning the existence of large independent sets of vertices whp in random instances of each of these models. As the main contribution of our paper, we introduce a new, general model, which we denote G(V, χ, f). In this model, V is a set of vertices and χ is a set of m vectors in ℝ^m. Furthermore, f is a probability distribution over the powerset 2^χ of subsets of χ. Every vertex selects a random subset of vectors according to the probability f and two vertices are connected according to a general intersection rule depending on their assigned set of vectors. This new general model seems able to simulate other known random graph models, by carefully describing its intersection rule.
Abstract: We address an important communication issue arising in
wireless cellular networks that utilize frequency division
multiplexing (FDM) technology. In such networks, many
users within the same geographical region (cell) can communicate
simultaneously with other users of the network
using distinct frequencies. The spectrum of the available
frequencies is limited; thus, efficient solutions to the call
control problem are essential. The objective of the call control
problem is, given a spectrum of available frequencies
and users that wish to communicate, to maximize the benefit,
i.e., the number of users that communicate without
signal interference. We consider cellular networks of reuse
distance k ≥ 2 and we study the online version of the
problem using competitive analysis. In cellular networks
of reuse distance 2, the previously best known algorithm
that beats the lower bound of 3 on the competitiveness
of deterministic algorithms, works on networks with one
frequency, achieves a competitive ratio against oblivious
adversaries which is between 2.469 and 2.651, and uses
a number of random bits at least proportional to the size
of the network. We significantly improve this result by
presenting a series of simple randomized algorithms that have
competitive ratios significantly smaller than 3, work on networks
with arbitrarily many frequencies, and use only a
constant number of random bits or a comparable weak
random source. The best competitiveness upper bound
we obtain is 16/7 using only four random bits. In cellular
networks of reuse distance k > 2, we present simple
randomized online call control algorithms with competitive
ratios which significantly beat the lower bounds on
the competitiveness of deterministic ones and use only
O(log k) random bits. Also, we show new lower bounds on
the competitiveness of online call control algorithms in cellular
networks of any reuse distance. In particular, we show
that no online algorithm can achieve competitive ratio better
than 2, 25/12, and 2.5, in cellular networks with reuse
distance k ∈ {2, 3, 4}, k = 5, and k ≥ 6, respectively.
Abstract: Clustering is an important research topic for wireless sensor
networks (WSNs). A large variety of approaches has been
presented focusing on different performance metrics. Even
though all of them have many practical applications, an
extremely limited number of software implementations is
available to the research community. Furthermore, these very few
techniques are implemented for specific WSN systems or are
integrated in complex applications. Thus it is very difficult
to comparatively study their performance and almost impossible
to reuse them in future applications under a different
scope. In this work we study a large body of well established
algorithms. We identify their main building blocks
and propose a component-based architecture for developing
clustering algorithms that (a) promotes exchangeability of
algorithms thus enabling the fast prototyping of new approaches,
(b) allows cross-layer implementations to realize
complex applications, (c) offers a common platform to
comparatively study the performance of different approaches,
(d) is hardware and OS independent. We implement 5 well
known algorithms and discuss how to implement 11 more.
We conduct an extended simulation study to demonstrate
the faithfulness of our implementations when compared to
the original implementations. Our simulations are at very
large scale thus also demonstrating the scalability of the
original algorithms beyond their original presentations. We
also conduct experiments to assess their practicality in real
WSNs. We demonstrate how the implemented clustering
algorithms can be combined with routing and group key
establishment algorithms to construct WSN applications. Our
study clearly demonstrates the applicability of our approach
and the benefits it offers to both research & development
communities.
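Purely as an illustration of what a component-based decomposition can look like (the class and method names are assumptions, not the framework's API), the sketch below hides the cluster-head election rule behind one small interface so that different election strategies can be swapped without touching the rest of the stack.

```python
# Hypothetical illustration of a component-based decomposition for WSN
# clustering: the cluster-head election rule is exposed behind one small
# interface so that strategies can be swapped without touching the rest of
# the stack. Names are assumptions, not the actual framework's API.

from abc import ABC, abstractmethod
import random

class ClusterHeadElection(ABC):
    @abstractmethod
    def is_cluster_head(self, node_id, neighbours, residual_energy, rnd):
        ...

class ProbabilisticElection(ClusterHeadElection):
    """LEACH-style rule: become head with a fixed probability each round."""
    def __init__(self, p=0.05):
        self.p = p
    def is_cluster_head(self, node_id, neighbours, residual_energy, rnd):
        return rnd.random() < self.p

class EnergyBiasedElection(ClusterHeadElection):
    """Bias the election probability by residual energy."""
    def __init__(self, p=0.05, e_max=100.0):
        self.p, self.e_max = p, e_max
    def is_cluster_head(self, node_id, neighbours, residual_energy, rnd):
        return rnd.random() < self.p * (residual_energy / self.e_max)

def run_round(election, energies, rnd=random.Random(1)):
    return [n for n, e in energies.items()
            if election.is_cluster_head(n, neighbours=None, residual_energy=e, rnd=rnd)]

energies = {n: random.uniform(20, 100) for n in range(200)}
print(len(run_round(ProbabilisticElection(), energies)),
      len(run_round(EnergyBiasedElection(), energies)))
```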
Abstract: We survey here some recent computational models for networks of tiny artifacts. In particular, we focus on networks consisting of artifacts with sensing capabilities. We first imagine the artifacts moving passively, that is, being mobile but unable to control their own movement. This leads us to the population protocol model of Angluin et al. (2004) [16]. We survey this model and some of its recent enhancements. In particular, we also present the mediated population protocol model in which the interaction links are capable of storing states and the passively mobile machines model in which the finite state nature of the agents is relaxed and the agents become multitape Turing machines that use a restricted space. We next survey the sensor field model, a general model capturing some identifying characteristics of many sensor networks' settings. A sensor field is composed of kinds of devices that can communicate with one another and also with the environment through input/output data streams. We, finally, present simulation results between sensor fields and population protocols and analyze the capability of their variants to decide graph properties.
Abstract: Here we survey various computational models for Wireless Sensor Networks (WSNs). The population protocol model (PP) considers networks of tiny mobile finite-state artifacts that can sense the environment and communicate in pairs to perform a computation. The mediated population protocol model (MPP) enhances the previous model by allowing the communication links to have a constant size buffer, providing more computational power. The graph decision MPP model (GDM) is a special case of MPP that focuses on the MPP's ability to decide graph properties of the network. Another direction towards enhancing the PP is followed by the PALOMA model in which the artifacts are no longer finite-state automata but Turing Machines of logarithmic memory in the population size. A different approach to modeling WSNs is the static synchronous sensor field model (SSSF) which describes devices communicating through a fixed communication graph and interacting with their environment via input and output data streams. In this survey, we present the computational capabilities of each model and provide directions for further research.
Abstract: In this chapter, our focus is on computational network analysis from a theoretical point of view. In particular, we study the \emph{propagation of influence and computation in dynamic distributed computing systems}. We focus on a \emph{synchronous message passing} communication model with bidirectional links. Our network dynamicity assumption is a \emph{worst-case dynamicity} controlled by an adversary scheduler, which has received much attention recently. We first study the fundamental \emph{naming} and \emph{counting} problems (and some variations) in
networks that are \emph{anonymous}, \emph{unknown}, and possibly dynamic. Network dynamicity is modeled here by the \emph{1-interval connectivity model}, in which communication is synchronous and a (worst-case) adversary
chooses the edges of every round subject to the condition that each instance is connected. We then replace this quite strong assumption by minimal \emph{temporal connectivity} conditions. These conditions only require that \emph{another causal influence occurs within every time-window of some given length}. Based on this basic idea we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate \emph{termination criteria} in networks in which an upper bound on any of these metrics is known. We exploit these termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental \emph{counting} and \emph{all-to-all token dissemination} (or \emph{gossip}) problems. Finally, we propose another model of worst-case temporal connectivity, called \emph{local
communication windows}, that assumes a fixed underlying communication network and restricts the adversary to allow communication between local neighborhoods in every time-window of some fixed length. We prove some basic properties and provide a protocol for counting in this model.
Abstract: A mimicking network for a k-terminal network, N, is one whose realizable external flows are
the same as those of N. Let S(k) denote the minimum size of a mimicking network for a k-terminal network.
In this paper we give new constructions of mimicking networks and prove the following results (the values
in brackets are the previously best known results): S(4) = 5 [2^16], S(5) = 6 [2^32]. For bounded treewidth
networks we show S(k) = O(k) [2^(2^k)], and for outerplanar networks we show S(k) ≤ 10k − 6 [k·2^(2k+2)].
Abstract: In this article, we present a detailed performance
evaluation of a hybrid optical switching (HOS)
architecture called Overspill Routing in Optical Networks
(ORION). The ORION architecture combines
(optical) wavelength and (electronic) packet switching,
so as to obtain the individual advantages of both switching
paradigms. In particular, ORION exploits the idle periods of
established lightpaths to transmit packets destined to the next
common node, or even directly to their common end-destination.
Depending on whether all lightpaths are allowed to simultaneously
carry and terminate overspill traffic or overspill is restricted
to a sub-set of wavelengths, the architecture is referred to as
un-constrained or constrained ORION, respectively. To
evaluate both cases, we developed an extensive network
simulator where the basic features of the ORION architecture
were modeled, including suitable edge/core node
switches and load-varying sources to simulate overloading
traffic conditions. Further, we have assessed various
aspects of the ORION architecture including two
basic routing/forwarding policies and various buffering
schemes. The complete network study shows that
ORION can absorb temporal traffic overloads, as intended,
provided sufficient buffering is present. We also
demonstrate that the restriction of simultaneous packet
insertions/extractions, to reduce the necessary interfaces,
does not deteriorate performance, and thus the use of traffic
concentrators assures ORION's economic viability.
Abstract: Ever-increasing bandwidth demands and higher flexibility are the main challenges for the next generation optical core networks. A new trend in order to address these challenges is to consider the impairments of the lightpaths during the design of optical networks. In our work, we focus on translucent optical networks, where some lightpaths are routed transparently, whereas others go through a number of regenerators. We present a cost analysis of design strategies, which are based either on an exact Quality of Transmission (QoT) validation or on a relaxed one and attempt to reduce the amount of regenerators used. In the exact design strategy, regenerators are required if the QoT of a candidate lightpath is below a predefined threshold, assuming empty network conditions. In the relaxed strategy, this predefined threshold is lower, while it is assumed that the network is fully loaded. We evaluate techno-economically the suggested design solutions and also show that adding more flexibility to the optical nodes has a large impact to the total infrastructure cost.
Abstract: We investigate the existence of optimal tolls for atomic symmetric
network congestion games with unsplittable traffic and arbitrary non-negative and
non-decreasing latency functions. We focus on pure Nash equilibria and a natural
toll mechanism, which we call cost-balancing tolls. A set of cost-balancing tolls
turns every path with positive traffic on its edges into a minimum cost path. Hence
any given configuration is induced as a pure Nash equilibrium of the modified
game with the corresponding cost-balancing tolls. We show how to compute in
linear time a set of cost-balancing tolls for the optimal solution such that the total
amount of tolls paid by any player in any pure Nash equilibrium of the modified
game does not exceed the latency on the maximum latency path in the optimal
solution. Our main result is that for congestion games on series-parallel networks
with increasing latencies, the optimal solution is induced as the unique pure Nash
equilibrium of the game with the corresponding cost-balancing tolls. To the best
of our knowledge, only linear congestion games on parallel links were known to
admit optimal tolls prior to this work. To demonstrate the difficulty of computing
a better set of optimal tolls, we show that even for 2-player linear congestion
games on series-parallel networks, it is NP-hard to decide whether the optimal
solution is the unique pure Nash equilibrium or there is another equilibrium of
total cost at least 6/5 times the optimal cost.
Abstract: Counting in general, and estimating the cardinality of (multi-) sets in particular, is highly desirable for a large variety of applications, representing a foundational block for the efficient deployment and access of emerging internet-scale information systems. Examples of such applications range from optimizing query access plans in internet-scale databases, to evaluating the significance (rank/score) of various data items in information retrieval applications. The key constraints that any acceptable solution must satisfy are: (i) efficiency: the number of nodes that need be contacted for counting purposes must be small in order to enjoy small latency and bandwidth requirements; (ii) scalability, seemingly contradicting the efficiency goal: arbitrarily large numbers of nodes may need to add elements to a (multi-) set, which dictates the need for a highly distributed solution, avoiding server-based scalability, bottleneck, and availability problems; (iii) access and storage load balancing: counting and related overhead chores should be distributed fairly to the nodes of the network; (iv) accuracy: tunable, robust (in the presence of dynamics and failures) and highly accurate cardinality estimation; (v) simplicity and ease of integration: special, solution-specific indexing structures should be avoided. In this paper, first we contribute a highly-distributed, scalable, efficient, and accurate (multi-) set cardinality estimator. Subsequently, we show how to use our solution to build and maintain histograms, which have been a basic building block for query optimization for centralized databases, facilitating their porting into the realm of internet-scale data networks.
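The abstract does not reveal the estimator's internals; as a point of reference only, the sketch below shows a standard order-statistics ("k minimum values") hash sketch of the kind such systems typically build on. Two such sketches can be merged by keeping the k smallest hashes of their union, which is what makes this style of estimator easy to aggregate in a distributed setting; it is not necessarily the construction proposed in the paper.

```python
import hashlib

# A standard order-statistics ("k minimum values") cardinality estimator,
# shown only as a reference point for the kind of hash-sketch estimator such
# distributed systems build on; not necessarily the paper's construction.

K = 256

def h(item):
    """Hash an item to a float in [0, 1)."""
    digest = hashlib.sha256(str(item).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def kmv_sketch(items, k=K):
    return sorted(h(x) for x in set(items))[:k]

def estimate(sketch, k=K):
    if len(sketch) < k:
        return len(sketch)            # saw fewer than k distinct items: exact
    return (k - 1) / sketch[k - 1]    # unbiased KMV estimate

print(round(estimate(kmv_sketch(range(100_000)))))   # roughly 100,000
```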
Abstract: We consider a synchronous distributed system with n processes that communicate through a dynamic network guaranteeing 1-interval connectivity, i.e., the network topology graph might change at each interval while keeping the graph connected at any time. The processes belonging to the distributed system are identified through a set of labels L = {l1, l2, . . . , lk} (with 1 ≤ k < n). In this challenging system model, the paper addresses the following problem: "counting the number of processes with the same label". We provide a distributed algorithm that is able to solve the problem based on the notion of energy transfer. Each process owns a fixed energy charge, and tries to discharge itself exchanging, at each round, at most half of its charge with neighbors. The paper also discusses when such counting is possible in the presence of failures. Counting processes with the same label in dynamic networks with homonyms is of great importance because it is as difficult as computing generic aggregating functions.
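The following is a toy simulation of the energy-transfer idea under simplifying assumptions (a single label, a fresh random path connecting all nodes as the communication graph of each round, no failures): every process starts with one unit of charge, non-leaders keep half of their charge and split the rest among their current neighbours, and the leader only absorbs; since charge is conserved, the leader's charge approaches n. The real algorithm and its termination test are more involved.

```python
import random

# Toy simulation of the energy-transfer idea on a dynamic network: each
# process starts with one unit of charge; every round, a non-leader keeps
# half of its charge and splits the other half evenly among its current
# neighbours; the leader only absorbs. Charge is conserved, so the leader's
# charge approaches n and yields a count. Illustrative sketch only.

def random_connected_round(n):
    """A fresh random path over all nodes (a simple 1-interval-connected adversary)."""
    nodes = list(range(n))
    random.shuffle(nodes)
    neigh = {v: set() for v in range(n)}
    for u, v in zip(nodes, nodes[1:]):
        neigh[u].add(v)
        neigh[v].add(u)
    return neigh

def simulate(n, rounds, leader=0):
    charge = [1.0] * n
    for _ in range(rounds):
        neigh = random_connected_round(n)
        outgoing = [0.0] * n
        incoming = [0.0] * n
        for v in range(n):
            if v == leader or not neigh[v]:
                continue
            share = charge[v] / 2 / len(neigh[v])
            outgoing[v] = charge[v] / 2
            for u in neigh[v]:
                incoming[u] += share
        charge = [charge[v] - outgoing[v] + incoming[v] for v in range(n)]
    return charge[leader]

print(round(simulate(n=50, rounds=2000)))   # approaches 50
```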
Abstract: Random walks in wireless sensor networks can serve as fully
local, very simple strategies for sink motion that reduce energy dissipa-
tion a lot but increase the latency of data collection. To achieve satis-
factory energy-latency trade-offs the sink walks can be made adaptive,
depending on network parameters such as density and/or history of past
visits in each network region; but this increases the memory require-
ments. Towards better balances of memory/performance, we propose two
new random walks: the Random Walk with Inertia and the Explore-and-
Go Random Walk; we also introduce a new metric (Proximity Varia-
tion) that captures the different way each walk gets close to the network
nodes. We implement the new walks and experimentally compare them
to known ones. The simulation findings demonstrate that the new walks'
performance (cover time) gets close to the one of the (much stronger)
biased walk, while in some other respects (partial cover time, proximity
variation) they even outperform it. We note that the proposed walks
have been fine-tuned in the light of experimental findings.
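As a rough illustration of the inertia idea, the sketch below keeps the previous direction with probability p_keep and otherwise moves uniformly at random among feasible directions; the grid model and the parameter value are illustrative assumptions, not the walks' exact definitions or tuning.

    import random

    def inertia_walk(width, height, steps, p_keep=0.8, seed=0):
        """Random walk with inertia on a grid; returns the fraction of cells
        visited, a crude proxy for (partial) cover time behaviour."""
        rng = random.Random(seed)
        dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        x, y = width // 2, height // 2
        prev = rng.choice(dirs)
        visited = {(x, y)}
        for _ in range(steps):
            feasible = [(dx, dy) for dx, dy in dirs
                        if 0 <= x + dx < width and 0 <= y + dy < height]
            if prev in feasible and rng.random() < p_keep:
                dx, dy = prev                  # keep walking in the same direction
            else:
                dx, dy = rng.choice(feasible)  # otherwise pick a random direction
            x, y, prev = x + dx, y + dy, (dx, dy)
            visited.add((x, y))
        return len(visited) / (width * height)

    print(inertia_walk(30, 30, 20000, p_keep=0.8))   # inertia-biased walk
    print(inertia_walk(30, 30, 20000, p_keep=0.0))   # plain random walk, for comparison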
Abstract: DAP (Distributed Algorithms Platform) is a generic and homogeneous simulation environment aiming at the implementation, simulation, and testing of distributed algorithms for wired and wireless networks. In this work, we present its architecture, the most important design decisions, and discuss its distinct features and functionalities. DAP allows the algorithm designer to implement a distributed protocol by creating his own customized environment, and programming in a standard programming language in a style very similar to that of a real-world application. DAP provides a graphical user interface that allows the designer to monitor and control the execution of simulations, visualize algorithms, as well as gather statistics and other information for their experimental analysis and testing.
Abstract: In this work we examine a task scheduling and data migration problem for Grid Networks, which we refer to as the Data Consolidation (DC) problem. DC arises when a task requires two or more pieces of data for its execution, possibly scattered throughout the Grid Network. In such a case, the scheduler and the data manager must select the data replicas to be used and the site where these will accumulate for the task to be executed. The policies for selecting the data replicas and the data consolidating site comprise the Data Consolidation problem. We propose and experimentally evaluate a number of DC techniques. Our simulation results support our belief that DC is an important technique for Data Grids since it can substantially improve task delay, network load and other performance-related parameters.
Abstract: The purpose of the first Student Workshop on Wireless Sensor Networks is to bring together both graduate and undergraduate students working in the area of wireless sensor networks, with a focus on applications, real-world experiments or deployments of wireless sensor networks. Students will have the opportunity to interact with their peers and publicize and get feedback on their work, exchange experiences, make contacts, and learn what other students are doing in the Wireless Sensor Networks area. The workshop will be a day-long program organized in such a way as to promote lively discussions.
Abstract: The Greek School Network (GSN) is the nationwide network that connects all units of primary and secondary education in Greece. GSN offers a significant set of diverse services to more than 15.000 schools and administrative units, and more than 60.000 teachers, placing GSN second in infrastructure size nationwide. GSN has relied on the emerging power of open source software to build cutting-edge services capable of covering internal administrative and monitoring needs, end user demands, and, foremost, modern pedagogical requirements for tools and services. GSN provides a wide set of advanced services, varying from web mail to virtual classrooms and synchronous/asynchronous tele-education. This paper presents an evaluation of GSN open source services based on the opinions of users who use GSN for educational purposes, and on usage and traffic measurement statistics. The paper reaches the conclusion that open source software provides a sound technological platform that meets the needs for cutting edge educational services deployment, and innovative, competitive software production for educational networks.
Abstract: In this work we introduce two practical and interesting models of ad-hoc mobile networks: (a) hierarchical ad-hoc networks, comprised of dense subnetworks of mobile users interconnected by a very fast yet limited backbone infrastructure, (b) highly changing ad-hoc networks, where the deployment area changes in a highly dynamic way and is unknown to the protocol. In such networks, we study the problem of basic communication, i.e., sending messages from a sender node to a receiver node. For highly changing networks, we investigate an efficient communication protocol exploiting the coordinated motion of a small part of an ad-hoc mobile network (the "runners support") to achieve fast communication. This protocol, instead of using a fixed-size support for the whole duration of the protocol, employs a support of some initial (small) size which adapts (given some time which can be made fast enough) to the actual levels of traffic and the (unknown and possibly rapidly changing) network area, by changing its size in order to converge to an optimal size, thus satisfying certain Quality of Service criteria. Using random walks theory, we show that such an adaptive approach is, for this class of ad-hoc mobile networks, significantly more efficient than a simple non-adaptive implementation of the basic "runners support" idea, introduced in [9,10]. For hierarchical ad-hoc networks, we establish communication by using a "runners" support in each lower level of the hierarchy (i.e., in each dense subnetwork), while the fast backbone provides interconnections at the upper level (i.e., between the various subnetworks). We analyze the time efficiency of this hierarchical approach. This analysis indicates that the hierarchical implementation of the support approach significantly outperforms a simple implementation of it in hierarchical ad-hoc networks. Finally, we discuss a possible combination of the two approaches above (the hierarchical and the adaptive ones) that can be useful in ad-hoc networks that are both hierarchical and highly changing. Indeed, in such cases the hierarchical nature of these networks further supports the possibility of adaptation.
Abstract: In this work we investigate the problem of communication among mobile hosts, one of the most fundamental problems in ad-hoc mobile networks that is at the core of many algorithms. Our work investigates the extreme case of total absence of any fixed network backbone or centralized administration, instantly forming networks based only on mobile hosts with wireless communication capabilities, where topological connectivity is subject to frequent, unpredictable change.
For such dynamically changing networks we propose a set of protocols which exploit the coordinated (by the protocol) motion of a small part of the network in order to manage network operations. We show that such protocols can be designed to work correctly and efficiently for communication by avoiding message flooding. Our protocols manage to establish communication between any pair of mobile hosts in small, a-priori guaranteed expected time bounds. Our results exploit and further develop some fundamental properties of random walks in finite graphs.
Apart from studying the general case, we identify two practical and interesting cases of ad-hoc mobile networks: a) hierarchical ad-hoc networks, b) highly changing ad-hoc networks, for which we propose protocols that efficiently deal with the problem of basic communication.
We have conducted a set of extensive experiments, involving thousands of mobile hosts, in order to validate the theoretical results and show that our protocols achieve very efficient communication under different scenarios.
Abstract: Evaluating target tracking protocols for wireless sensor networks that can localize multiple mobile devices can be a very challenging task. Such protocols usually aim at minimizing communication overhead, data processing for the participating nodes, as well as delivering adequate tracking information of the mobile targets in a timely manner. Simulations of such protocols are performed using theoretical models that are based on unrealistic assumptions like the unit disk graph communication model, ideal network localization and perfect distance estimations. With these assumptions taken for granted, theoretical models claim various performance milestones that cannot be achieved in realistic conditions. In this paper we design a new localization protocol, where mobile assets can be tracked passively via software agents. We address the issues that hinder its performance due to the real environment conditions and provide a deployable protocol. The implementation, integration and experimentation of this new protocol and its optimizations were performed using the WISEBED framework. We apply our protocol in multiple indoor wireless sensor testbeds with multiple experimental scenarios to showcase scalability and trade-offs between network properties and configurable protocol parameters. By analysis of the real world experimental output, we present results that depict a more realistic view of the target tracking problem, regarding power consumption and the quality of tracking information. Finally we also conduct some very focused simulations to assess the scalability of our protocol in very large networks and with multiple mobile assets.
Abstract: Wireless sensor networks are comprised of a vast number of ultra-small autonomous computing, communication and sensing devices, with restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Such networks can be very useful in practice, e.g. in the local monitoring of ambient conditions and reporting them to a control center. In this paper we propose a new lightweight, distributed group key establishment protocol suitable for such energy constrained networks. Our approach basically trades off complex message exchanges for some amount of additional local computation. The extra computations are simple for the devices to implement and are evenly distributed across the participants of the network, leading to good energy balance. We evaluate the performance of our protocol in comparison to existing group key establishment protocols, both in simulated and real environments. The security of all the protocols rests on the intractability of the Diffie-Hellman problem, and we used its elliptic curve analog in our experiments. Our findings basically indicate the feasibility of implementing our protocol in real sensor network devices and highlight the advantages and disadvantages of each approach given the available technology and the corresponding efficiency (energy, time) criteria.
Abstract: Wireless Sensor Networks consist of a large number of small, autonomous devices that are able to interact with their environment by sensing and to collaborate to fulfill their tasks, since a single node is usually incapable of doing so on its own; they use wireless communication to enable this collaboration. Each device has limited computational and energy resources; thus, a basic issue in the applications of wireless sensor networks is low energy consumption and, hence, the maximization of the network lifetime.
The collected data is disseminated to a static control point (the data sink) in the network, using node-to-node, multi-hop data propagation. However, the sensor devices consume significant amounts of energy, and the implementation complexity increases, since a routing protocol is executed. Also, a point of failure emerges in the area near the control center, where nodes relay the data arriving from nodes that are farther away.
In this work we have developed protocols that control the movement of the sink in wireless sensor networks with non-uniform deployment of the sensor nodes, in order to achieve efficient (with respect to both energy and latency) data collection. More specifically, a graph formation phase is executed by the sink during initialization: the network area is partitioned into equal square regions, at which the sink pauses for a certain amount of time during the network traversal in order to collect data.
We propose two network traversal methods, a deterministic and a random one. When the sink moves in a random manner, the selection of the next area to visit is done in a biased random manner depending on the frequency of visits of its neighbor areas. Thus, less frequently visited areas are favored. Moreover, our method locally determines the stop time needed to serve each region with respect to some global network resources, such as the initial energy reserves of the nodes and the density of the region, stopping for a greater time interval at regions with higher density, and hence more traffic load. In this way, we achieve accelerated coverage of the network as well as fairness in the service time of each region. Besides randomized mobility, we also propose an optimized deterministic trajectory without visit overlaps, including direct (one-hop) sensor-to-sink data transmissions only.
We evaluate our methods via simulation, in diverse network settings and comparatively to related state of the art solutions. Our findings demonstrate significant latency and energy consumption improvements, compared to previous research.
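As a rough illustration of the randomized traversal described above, the sketch below draws the next region with probability inversely proportional to how often it has already been visited and pauses longer in denser regions; the exact weighting and stop-time rule of the actual protocols are not given here, so both are assumptions.

    import random

    def biased_traversal(regions, density, steps, base_stop=1.0, seed=0):
        """Randomized sink traversal sketch. `regions` maps a region id to its
        neighbouring region ids; less-visited neighbours are favoured and the
        pause at a region grows with its node density."""
        rng = random.Random(seed)
        visits = {r: 0 for r in regions}
        current = next(iter(regions))
        schedule = []                        # (region, stop_time) visit plan
        for _ in range(steps):
            visits[current] += 1
            stop = base_stop * density[current] / max(density.values())
            schedule.append((current, stop))
            nbrs = list(regions[current])
            weights = [1.0 / (1 + visits[r]) for r in nbrs]
            current = rng.choices(nbrs, weights=weights, k=1)[0]
        return schedule

    # a 2x2 grid of regions with uneven density
    regions = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    density = {0: 10, 1: 40, 2: 25, 3: 5}
    for region, stop in biased_traversal(regions, density, 8):
        print(f"visit region {region}, pause {stop:.2f}")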
Abstract: Wireless sensor networks are a recently introduced category of ad hoc computer networks, comprised of nodes of small size with limited computing and energy resources. Such nodes are capable of measuring physical properties such as temperature and humidity, of communicating wirelessly with each other and, in some cases, of interacting with their surrounding environment (through the use of electromechanical parts).
As these networks have begun to be widely available (in terms of cost and commercial hardware availability), their field of application and philosophy of use are constantly evolving. We have numerous examples of their applications, ranging from monitoring the biodiversity of a specific outdoor area to structural health monitoring of bridges, and also networks ranging from a few tens of nodes to even thousands of nodes.
In this PhD thesis we investigated the following basic research lines related to wireless sensor networks:
a) their simulation,
b) the development of data propagation protocols suited to such networks and their evaluation through simulation,
c) the modelling of ``hostile'' circumstances (obstacles) during their operation and evaluation of their impact through simulation,
d) the development of a sensor network management application.
Regarding simulation, we initially placed an emphasis on issues such as the effective simulation of networks of several thousands of nodes, and in that respect we developed a network simulator (simDust), which is extendable through the addition of new data propagation protocols and visualization capabilities. This simulator was used to evaluate the performance of a number of characteristic data propagation protocols for wireless sensor networks. Furthermore, we developed a new protocol (VRTP) and evaluated its performance against other similar protocols. Our studies show that the new protocol, which uses dynamic changes of the transmission range of the network nodes, performs better in certain cases than other related protocols, especially in networks containing obstacles and in the case of non-homogeneous placement of nodes.
Moreover, we emphasized the addition of ``realistic'' conditions, which have an adversarial effect on protocol operation, to the simulation of such protocols. Our goal was to introduce a model for obstacles that adds little computational overhead to a simulator, and also to study the effect of the inclusion of such a model on data propagation protocols that use geographic information (absolute or relative). Such protocols are relatively sensitive to dynamic topology changes and network conditions. Through our experiments, we show that the inclusion of obstacles during simulation can have a significant effect on these protocols.
Finally, regarding applications, we initially proposed an architecture (WebDust/ShareSense) for the management of such networks, providing basic capabilities for managing them and developing applications on top of it. Features that set it apart are the capability of managing multiple heterogeneous sensor networks, openness, and the use of a peer-to-peer architecture for the interconnection of multiple sensor networks. A large part of the proposed architecture was implemented, while the overall architecture was extended to also include additional visualization capabilities.
Abstract: Wireless sensor networks are comprised of a vast number of devices, situated in an area of interest that self organize in a structureless network, in order to monitor/record/measure an environmental variable or phenomenon and subsequently to disseminate the data to the control center.
Here we present research focused on the development, simulation and evaluation of energy efficient algorithms; our basic goal is to minimize energy consumption. Despite technology advances, the problem of optimizing energy use remains relevant, since current and emerging hardware solutions do not solve it on their own.
We aim to reduce communication cost by introducing novel techniques that facilitate the development of new algorithms. We investigated techniques for the distributed adaptation of a protocol's operation using information available locally at every node; thus, through local choices, we improve overall performance. We propose techniques for collecting and exploiting limited local knowledge of the network conditions. In an energy efficient manner, we collect additional information which is used to achieve improvements such as forming energy efficient, low latency and fault tolerant paths to route data. We investigate techniques for managing mobility in networks where movement is a characteristic of the control center as well as the sensors. We examine methods for traversing and covering the network field based on probabilistic movement that uses local criteria to favor certain areas.
The algorithms we develop based on these techniques operate a) at low level managing devices, b) on the routing layer and c) network wide, achieving macroscopic behavior through local interactions. The algorithms are applied in network cases that differ in density, node distribution, available energy and also in fundamentally different models, such as under faults, with incremental node deployment and mobile nodes. In all these settings our techniques achieve significant gains, thus distinguishing their value as tools of algorithmic design.
Abstract: We study here the problem of determining the majority type in an arbitrary connected network, each vertex of which has initially two possible types. The vertices may have a few additional possible states and can interact in pairs only if they share an edge. Any (population) protocol is required to stabilize in the initial majority. We first present and analyze a protocol with 4 states per vertex that always computes the initial majority value, under any fair scheduler. As we prove, this protocol is optimal, in the sense that there is no population protocol that always computes majority with fewer than 4 states per vertex. However this does not rule out the existence of a protocol with 3 states per vertex that is correct with high probability. To this end, we examine a very natural majority protocol with 3 states per vertex, introduced in [Angluin et al. 2008] where its performance has been analyzed for the clique graph. We study the performance of this protocol in arbitrary networks. We prove that, when the two initial states are put uniformly at random on the vertices, this protocol converges to the initial majority with probability higher than the probability of converging to the initial minority. In contrast, we present an infinite family of graphs, on which the protocol can fail whp, even when the difference between the initial majority and the initial minority is n−Θ(lnn). We also present another infinite family of graphs in which the protocol of Angluin et al. takes an expected exponential time to converge. These two negative results build upon a very positive result concerning the robustness of the protocol on the clique. Surprisingly, the resistance of the clique to failure causes the failure in general graphs. Our techniques use new domination and coupling arguments for suitably defined processes whose dynamics capture the antagonism between the states involved.
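For concreteness, the 3-state protocol of Angluin et al. is commonly stated with the interaction rules in the sketch below; the simulation schedules one interaction per step along a uniformly random edge of the given graph, which is only one of the fair schedulers considered above, and the instance sizes are illustrative.

    import random
    from collections import Counter

    def three_state_majority(edges, init, max_steps=10**6, seed=0):
        """3-state approximate majority: states 'x', 'y', 'b' (blank) with rules
        (x,y)->(x,b), (y,x)->(y,b), (x,b)->(x,x), (y,b)->(y,y), applied to the
        responder of a uniformly random edge at each step."""
        rng = random.Random(seed)
        state = dict(init)
        counts = Counter(state.values())
        n = len(state)
        for _ in range(max_steps):
            u, v = rng.choice(edges)
            if rng.random() < 0.5:            # pick initiator/responder at random
                u, v = v, u
            a, b = state[u], state[v]
            new_b = b
            if a in "xy" and b in "xy" and a != b:
                new_b = "b"
            elif a in "xy" and b == "b":
                new_b = a
            if new_b != b:
                counts[b] -= 1
                counts[new_b] += 1
                state[v] = new_b
            if counts["x"] == n or counts["y"] == n:
                return "x" if counts["x"] == n else "y"
        return None                           # did not stabilize within max_steps

    # clique on 50 nodes: 35 start with 'x', 15 with 'y'
    n = 50
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    init = {i: ("x" if i < 35 else "y") for i in range(n)}
    print(three_state_majority(edges, init))  # the initial majority 'x', w.h.p.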
Abstract: A central problem in distributed computing and telecommunications
is the establishment of common knowledge between two computing
entities. An immediate use of such common knowledge is in the
initiation of a secure communication session between two entities
since the two entities may use this common knowledge in order to
produce a secret key for use with some symmetric cipher.
The dynamic establishment of shared information (e.g. secret key)
between two entities is particularly important in networks with no
predetermined structure such as wireless mobile ad-hoc networks.
In such networks, nodes establish and terminate communication
sessions dynamically with other nodes which may have never been
encountered before in order to somehow exchange information which
will enable them to subsequently communicate in a secure manner.
In this paper we give and theoretically analyze a protocol that
enables two entities initially possessing a string each to
securely eliminate inconsistent bit positions, obtaining strings
with a larger percentage of similarities. This can help the nodes
establish a shared set of bits and use it as a key with some
shared key encryption scheme.
Abstract: We present the first approximate distance oracle for sparse directed networks with time-dependent arc-travel-times determined by continuous, piecewise linear, positive functions possessing the FIFO property. Our approach precomputes (1 + ε)-approximate distance summaries from selected landmark vertices to all other vertices in the network, and provides two sublinear-time query algorithms that deliver constant and (1 + σ)-approximate shortest-travel-times, respectively, for arbitrary origin-destination pairs in the network. Our oracle is based only on the sparsity of the network, along with two quite natural assumptions about travel-time functions which allow the smooth transition towards asymmetric and time-dependent distance metrics.
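The oracle itself is not reproduced here; the sketch below only shows the exact time-dependent baseline that such oracles approximate, namely a Dijkstra search in which every arc cost is a piecewise-linear FIFO travel-time function evaluated at the departure time. The toy network, the breakpoints and the function representation are illustrative assumptions.

    import heapq
    from bisect import bisect_right

    def pwl(breakpoints):
        """Piecewise-linear travel-time function given as [(t, value), ...]
        sorted by t, with constant extrapolation outside the given range."""
        ts = [t for t, _ in breakpoints]
        def f(t):
            i = bisect_right(ts, t) - 1
            if i < 0:
                return breakpoints[0][1]
            if i >= len(ts) - 1:
                return breakpoints[-1][1]
            (t0, v0), (t1, v1) = breakpoints[i], breakpoints[i + 1]
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        return f

    def td_dijkstra(graph, source, t0):
        """Earliest arrival times from `source` when departing at time t0;
        graph[u] is a list of (v, travel_time_function) pairs."""
        arrival = {source: t0}
        pq = [(t0, source)]
        while pq:
            t, u = heapq.heappop(pq)
            if t > arrival.get(u, float("inf")):
                continue
            for v, f in graph.get(u, []):
                tv = t + f(t)                 # under FIFO it never pays to wait
                if tv < arrival.get(v, float("inf")):
                    arrival[v] = tv
                    heapq.heappush(pq, (tv, v))
        return arrival

    # toy network: the arc a->b gets congested around t = 8
    graph = {
        "a": [("b", pwl([(0, 2), (8, 6), (16, 2)])), ("c", pwl([(0, 5)]))],
        "b": [("d", pwl([(0, 1)]))],
        "c": [("d", pwl([(0, 1)]))],
    }
    print(td_dijkstra(graph, "a", 0))   # departing before the congestion
    print(td_dijkstra(graph, "a", 8))   # departing during the congestion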
Abstract: When one engineers distributed algorithms, some special characteristics arise that are different from conventional (sequential or parallel) computing paradigms. These characteristics include: the need for either a scalable real network environment or a platform supporting a simulated distributed environment; the need to incorporate asynchrony, where arbitrary asynchrony is hard, if not impossible, to implement; and the generation of “difficult” input instances, which is a particular challenge. In this work, we identify some of the methodological issues required to address the above characteristics in distributed algorithm engineering and illustrate certain approaches to tackle them via case studies. Our discussion begins by addressing the need for a simulation environment and how asynchrony is incorporated when experimenting with distributed algorithms. We then proceed by suggesting two methods for generating difficult input instances for distributed experiments, namely a game-theoretic one and another based on simulations of adversarial arguments or lower bound proofs. We give examples of the experimental analysis of a pursuit-evasion protocol and of a shared memory problem in order to demonstrate these ideas. We then address a particularly interesting case of conducting experiments with algorithms for mobile computing and tackle the important issue of motion of processes in this context. We discuss the two-tier principle as well as a concurrent random walks approach on an explicit representation of motions in ad-hoc mobile networks, which allow at least for average-case analysis and measurements and may give worst-case inputs in some cases. Finally, we discuss a useful interplay between theory and practice that arises in modeling attack propagation in networks.
Abstract: In this survey, we describe the state of the art in experimentally-driven research on networks of tiny artifacts. The main topics are existing and planned practical testbeds, software simulations, and hybrid approaches; in addition, we describe a number of current studies undertaken by the authors.
Abstract: Wireless sensor networks are comprised of a vast number of ultra-small fully autonomous computing, communication and sensing devices, with very restricted energy and computing capabilities, which co-operate to accomplish a large sensing task. Such networks can be very useful in practice in applications that require fine-grain monitoring of the physical environment subjected to critical conditions (such as inaccessible terrains or disaster places). Very large numbers of sensor devices can be deployed in areas of interest and use self-organization and collaborative methods to form deeply networked environments. Features including the huge number of sensor devices involved, the severe power, computational and memory limitations, their dense deployment and frequent failures, raise new design and implementation challenges. The efficient and robust realization of such large, highly-dynamic, complex, non-conventional environments is a challenging algorithmic and technological task. In this work we consider certain important aspects of the design, deployment and operation of distributed algorithms for data propagation in wireless sensor networks and discuss some characteristic protocols, along with an evaluation of their performance.
Abstract: An ad hoc mobile network is a collection of mobile hosts, with wireless communication capabilities, forming a temporary network without the aid of any established fixed infrastructure. In such networks, topological connectivity is subject to frequent, unpredictable change. Our work focuses on networks with high rate of such changes to connectivity. For such dynamically changing networks we propose protocols which exploit the co-ordinated (by the protocol) motion of a small part of the network. We show that such protocols can be designed to work correctly and efficiently even in the case of arbitrary (but not malicious) movements of the hosts not affected by the protocol. We also propose a methodology for the analysis of the expected behavior of protocols for such networks, based on the assumption that mobile hosts (those whose motion is not guided by the protocol) conduct concurrent random walks in their motion space. In particular, our work examines the fundamental problem of communication and proposes distributed algorithms for it. We provide rigorous proofs of their correctness, and also give performance analyses by combinatorial tools. Finally, we have evaluated these protocols by experimental means.
Abstract: We exploit the game-theoretic ideas presented in [12] to
study the vertex coloring problem in a distributed setting. The vertices
of the graph are seen as players in a suitably defined strategic game,
where each player has to choose some color, and the payoff of a vertex is
the total number of players that have chosen the same color as its own.
We extend here the results of [12] by showing that, if any subset of nonneighboring
vertices perform a selfish step (i.e., change their colors in order
to increase their payoffs) in parallel, then a (Nash equilibrium) proper
coloring, using a number of colors within several known upper bounds
on the chromatic number, can still be reached in polynomial time. We
also present an implementation of the distributed algorithm in wireless
networks of tiny devices and evaluate the performance in simulated and
experimental environments. The performance analysis indicates that it
is the first practically implementable distributed algorithm.
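The dynamics above can be sketched as follows; note two assumptions made only for illustration: the payoff is taken to be zero whenever a neighbour shares the vertex's colour (which is what forces equilibria to be proper colourings), and selfish steps are applied one vertex at a time rather than by parallel sets of non-neighbouring vertices as analyzed above.

    def payoff(colors, adj, v):
        """Payoff of v: number of vertices sharing v's colour, or 0 if some
        neighbour of v has that colour (illustrative assumption)."""
        c = colors[v]
        if any(colors[u] == c for u in adj[v]):
            return 0
        return sum(1 for u in colors if colors[u] == c)

    def selfish_colouring(adj, num_colors):
        """Let vertices keep performing selfish steps (switching to a colour
        that strictly increases their payoff) until none can; the final state
        is a Nash equilibrium and a proper colouring."""
        colors = {v: 0 for v in adj}          # everyone starts with colour 0
        improved = True
        while improved:
            improved = False
            for v in adj:
                best_c, best_p = colors[v], payoff(colors, adj, v)
                for c in range(num_colors):
                    old = colors[v]
                    colors[v] = c
                    p = payoff(colors, adj, v)
                    colors[v] = old
                    if p > best_p:
                        best_c, best_p = c, p
                if best_c != colors[v]:
                    colors[v] = best_c
                    improved = True
        return colors

    # 5-cycle: the resulting equilibrium is a proper colouring with 3 colours
    adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(selfish_colouring(adj, 3))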
Abstract: Swarms of small devices, that bridge the physical with the digital domain and communicate with each other, have become essential parts of our daily activity, forming networks to support myriads of new and exciting applications. Technology requires such systems to be dependable and adaptive to the user needs, to sudden changes of the environment, and to specific applications characteristics.
Abstract: In this paper we describe a new simulation platform for heterogeneous distributed systems comprised of small programmable objects (e.g., wireless sensor networks) and traditional networked processors. Simulating such systems is complicated because of the need to coordinate compilers and simulators, often with very different interfaces, options, and fidelities.
Our platform (which we call ADAPT) is a flexible and extensible environment that provides a highly scalable simulator with unique characteristics. While the platform provides advanced functionality such as real-time simulation monitoring, custom topologies and scenarios, mixing real and simulated nodes, etc., the effort required by the user and the impact to her code is minimal. We here present its architecture, the most important design decisions, and discuss its distinct features and functionalities. We integrate our simulator to the Sun SPOT platform to enable simulation of sensing applications that employ both low-end and high-end devices programmed with different languages that are internetworked with heterogeneous technologies. We believe that ADAPT will make the development of applications that use small programmable objects more widely accessible and will enable researchers to conduct a joint research approach that combines both theory and practice.
Abstract: We address the issue of measuring distribution fairness in Internet-scale networks. This problem has several interesting instances encountered in different applications, ranging from assessing the distribution of load between network nodes for load balancing purposes, to measuring node utilization for optimal resource exploitation, and to guiding autonomous decisions of nodes in networks built with market-based economic principles. Although some metrics have been proposed, particularly for assessing load balancing algorithms, they fall short. We first study the appropriateness of various known and previously proposed statistical metrics for measuring distribution fairness. We put forward a number of required characteristics for appropriate metrics. We propose and comparatively study the appropriateness of the Gini coefficient (G) for this task. Our study reveals as most appropriate the metrics of G, the fairness index (FI), and the coefficient of variation (CV) in this order. Second, we develop six distributed sampling algorithms to estimate metrics online efficiently, accurately, and scalably. One of these algorithms (2-PRWS) is based on two effective optimizations of a basic algorithm, and the other two (the sequential sampling algorithm, LBS-HL, and the clustered sampling one, EBSS) are novel, developed especially to estimate G. Third, we show how these metrics, and especially G, can be readily utilized online by higher-level algorithms, which can now know when to best intervene to correct unfair distributions (in particular, load imbalances). We conclude with a comprehensive experimentation which comparatively evaluates both the various proposed estimation algorithms and the three most appropriate metrics (G, CV, and FI). Specifically, the evaluation quantifies the efficiency (in terms of the number of messages and a latency indicator), precision, and accuracy achieved by the proposed algorithms when estimating the competing fairness metrics. The central conclusion is that the proposed metric, G, can be estimated with a small number of messages and latency, regardless of the skew of the underlying distribution.
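For reference, the three metrics singled out above (G, CV and FI) have simple closed forms; the sketch below computes them directly on a load vector and is independent of the distributed estimation algorithms (2-PRWS, LBS-HL, EBSS), which are not reproduced here.

    from statistics import mean, pstdev

    def gini(x):
        """Gini coefficient: mean absolute pairwise difference over twice the mean."""
        n, mu = len(x), mean(x)
        return sum(abs(a - b) for a in x for b in x) / (2 * n * n * mu)

    def coeff_variation(x):
        """Coefficient of variation: population standard deviation over the mean."""
        return pstdev(x) / mean(x)

    def fairness_index(x):
        """Jain's fairness index: (sum x)^2 / (n * sum x^2); equals 1 for equal loads."""
        return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

    balanced = [10, 10, 10, 10]
    skewed = [1, 1, 1, 37]
    for load in (balanced, skewed):
        print(load, round(gini(load), 3), round(coeff_variation(load), 3),
              round(fairness_index(load), 3))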
Abstract: Service Oriented Computing and its most famous implementation technology, Web Services (WS), are becoming an important enabler of networked business models. Discovery mechanisms are a critical factor in the overall utility of Web Services. So far, discovery mechanisms based on the UDDI standard rely on many centralized and area-specific directories, which poses information stress problems such as performance bottlenecks and limited fault tolerance. In this context, decentralized approaches based on Peer to Peer overlay networks have been proposed by many researchers as a solution. In this paper, we propose a new structured P2P overlay network infrastructure designed for Web Services Discovery. We present theoretical analysis backed up by experimental results, showing that the proposed solution outperforms popular decentralized infrastructures for web discovery: Chord (and some of its successors), BATON (and its successor) and Skip-Graphs.
Abstract: This paper deals with early obstacle recognition in wireless sensor networks under various traffic patterns. In the presence of obstacles, the efficiency of routing algorithms is increased by voluntarily avoiding some regions in the vicinity of obstacles, areas which we call dead-ends. In this paper, we first propose a fast convergent routing algorithm with proactive dead-end detection, together with a formal definition and description of dead-ends. Secondly, we present a generalization of this algorithm which improves performance in all-to-many and all-to-all traffic patterns. In a third part we prove that this algorithm produces paths that are optimal up to a constant factor of 2π + 1. In a fourth part we consider the reactive version of the algorithm, which is an extension of a previously known early obstacle detection algorithm. Finally we give experimental results to illustrate the efficiency of our algorithms in different scenarios.
Abstract: In this paper we present the efficient burst reservation protocol (EBRP) suitable
for bufferless optical burst switching (OBS) networks. The EBRP protocol is a
two-way reservation scheme that employs timed and in-advance reservation of
resources. In the EBRP protocol timed reservations are relaxed, introducing a
reservation time duration parameter that is negotiated during call setup phase.
This feature allows bursts to reserve resources beyond their actual size to
increase their successful forwarding probability and can be used to provide
quality-of-service (QoS) differentiation. The EBRP protocol is suitable for
OBS networks and can guarantee a low blocking probability for bursts that can
tolerate the round-trip delay associated with the two-way reservation. We present
the main features of the proposed protocol and describe in detail the timing
considerations regarding the call setup phase and the actual reservation process.
Furthermore, we show evaluation results and compare the EBRP performance
against two other typical reservation schemes, a tell-and-wait and a tell-and-go
(just-enough-time) like protocol. EBRP has been developed for the control plane
of the IST-LASAGNE project.
Abstract: We propose a new data dissemination protocol for wireless sensor networks, that basically pulls some additional knowledge about the network in order to subsequently improve data forwarding towards the sink. This extra information is still local, limited and obtained in a distributed manner. This extra knowledge is acquired by only a small fraction of sensors thus the extra energy cost only marginally affects the overall protocol efficiency. The new protocol has low latency and manages to propagate data successfully even in the case of low densities. Furthermore, we study in detail the effect of failures and show that our protocol is very robust. In particular, we implement and evaluate the protocol using large scale simulation, showing that it significantly outperforms well known relevant solutions in the state of the art.
Abstract: Andrews et al. [Automatic method for hiding latency in high bandwidth networks, in: Proceedings of the ACM Symposium on Theory of Computing, 1996, pp. 257–265; Improved methods for hiding latency in high bandwidth networks, in: Proceedings of the Eighth Annual ACM Symposium on Parallel Algorithms and Architectures, 1996, pp. 52–61] introduced a number of techniques for automatically hiding latency when performing simulations of networks with unit delay links on networks with arbitrary unequal delay links. In their work, they assume that processors of the host network are identical in computational power to those of the guest network being simulated. They further assume that the links of the host are able to pipeline messages, i.e., they are able to deliver P packets in time O(P+d) where d is the delay on the link.
In this paper we examine the effect of eliminating one or both of these assumptions. In particular, we provide an efficient simulation of a linear array of homogeneous processors connected by unit-delay links on a linear array of heterogeneous processors connected by links with arbitrary delay. We show that the slowdown achieved by our simulation is optimal. We then consider the case of simulating cliques by cliques; i.e., a clique of heterogeneous processors with arbitrary delay links is used to simulate a clique of homogeneous processors with unit delay links. We reduce the slowdown from the obvious bound of the maximum delay link to the average of the link delays. In the case of the linear array we consider both links with and without pipelining. For the clique simulation the links are not assumed to support pipelining.
The main motivation of our results (as was the case with Andrews et al.) is to mitigate the degradation of performance when executing parallel programs designed for different architectures on a network of workstations (NOW). In such a setting it is unlikely that the links provided by the NOW will support pipelining and it is quite probable the processors will be heterogeneous. Combining our result on clique simulation with well-known techniques for simulating shared memory PRAMs on distributed memory machines provides an effective automatic compilation of a PRAM algorithm on a NOW.
Abstract: In large-scale or evolving networks, such as the Internet,
there is no authority possible to enforce a centralized traffic management.
In such situations, Game Theory and the concepts of Nash equilibria
and Congestion Games [8] are a suitable framework for analyzing
the equilibrium effects of selfish routes selection to network delays.
We focus here on layered networks where selfish users select paths to
route their loads (represented by arbitrary integer weights). We assume
that individual link delays are equal to the total load of the link. We
focus on the algorithm suggested in [2], i.e. a potential-based method
for finding pure Nash equilibria (PNE) in such networks. A superficial
analysis of this algorithm gives an upper bound on its time which is
polynomial in n (the number of users) and the sum of their weights. This
bound can be exponential in n when some weights are superpolynomial.
We provide strong experimental evidence that this algorithm actually
converges to a PNE in strongly polynomial time in n (independent of the
weights values). In addition we propose an initial allocation of users
to paths that dramatically accelerates this algorithm, compared to an
arbitrary initial allocation. A by-product of our research is the discovery
of a weighted potential function when link delays are exponential to their
loads. This asserts the existence of PNE for these delay functions and
extends the result of
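For intuition, the sketch below runs the improvement (best-response) dynamics that such a potential-based method relies on, in the simplest special case of weighted users on identical parallel links whose delay equals their load; the layered-network path structure and the accelerated initial allocation discussed above are not reproduced.

    def best_response_dynamics(weights, num_links):
        """Repeatedly move any user to a link that strictly decreases the delay
        it experiences, until no such move exists; the final assignment is a
        pure Nash equilibrium, and termination follows from a potential argument."""
        assignment = [0] * len(weights)       # everyone starts on link 0
        load = [0.0] * num_links
        load[0] = float(sum(weights))
        moved, steps = True, 0
        while moved:
            moved = False
            for i, w in enumerate(weights):
                cur = assignment[i]
                # delay user i would experience on each link
                best = min(range(num_links),
                           key=lambda l: load[l] + (0 if l == cur else w))
                if best != cur and load[best] + w < load[cur]:
                    load[cur] -= w
                    load[best] += w
                    assignment[i] = best
                    moved = True
                    steps += 1
        return assignment, load, steps

    assignment, load, steps = best_response_dynamics([5, 4, 3, 3, 2, 1], 3)
    print(assignment, load, steps)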
Abstract: In this thesis we investigate the problems of data routing and data collection in wireless sensor networks characterised by intense and highly diverse mobility. We propose a set of protocols that exploits the motion of the sensors in order to inform the sink about the network topology. We experimentally evaluate these protocols in a wide range of topologies, including both homogeneous and heterogeneous ones.
We also investigate random walks as simple motion strategies for mobile sinks that perform data collection from static WSNs. We propose three new random walks that improve latency compared to already known ones, as well as a new metric called Proximity Variation. This metric captures the different way each random walk traverses the network area.
Abstract: We examine a task scheduling and data migration problem for grid networks, which we refer to as the Data Consolidation (DC) problem. DC arises when a task concurrently requests multiple pieces of data, possibly scattered throughout the grid network, that have to be present at a selected site before the task's execution starts. In such a case, the scheduler and the data manager must select (i) the data replicas to be used, (ii) the site where these data will be gathered for the task to be executed, and (iii) the routing paths to be followed; this is assuming that the selected datasets are transferred concurrently to the execution site. The algorithms or policies for selecting the data replicas, the data consolidating site and the corresponding paths comprise a Data Consolidation scheme. We propose and experimentally evaluate several DC schemes with a polynomial number of operations that attempt to estimate the cost of the concurrent data transfers, to avoid congestion that may appear due to these transfers and to provide fault tolerance. Our simulation results strengthen our belief that DC is an important problem that needs to be addressed in the design of data grids, and can lead, if performed efficiently, to significant benefits in terms of task delay, network load and other performance parameters.
Abstract: Data collection is usually performed in wireless sensor networks by the sensors
relaying data towards a static control center (sink). Motivated by important
applications (mostly related to ambient intelligence and remote monitoring)
and as a first step towards introducing mobility, we propose the basic
idea of having a sink moving in the network area and collecting
data from sensors. We propose four characteristic mobility patterns
for the sink along with different data collection strategies. Through a
detailed simulation study, we evaluate several important performance properties of
each approach. Our findings demonstrate that by taking advantage
of the sink's mobility and shifting work from sensors to the powerful sink,
we can significantly reduce the energy spent in relaying traffic and thus greatly
extend the lifetime of the network.
Abstract: Through recent technology advances in the field of wireless energy transmission, Wireless Rechargeable Sensor Networks (WRSN) have emerged. In this new paradigm for WSNs a mobile entity called Mobile Charger (MC) traverses the network and replenishes the dissipated energy of sensors. In this work we first provide a formal definition of the charging dispatch decision problem and prove its computational hardness. We then investigate how to optimize the trade-offs of several critical aspects of the charging process such as a) the trajectory of the charger, b) the different charging policies and c) the impact of the ratio of the energy the MC may deliver to the sensors over the total available energy in the network. In the light of these optimizations, we then study the impact of the charging process to the network lifetime for three characteristic underlying routing protocols; a greedy protocol, a clustering protocol and an energy balancing protocol. Finally, we propose a Mobile Charging Protocol that locally adapts the circular trajectory of the MC to the energy dissipation rate of each sub-region of the network. We compare this protocol against several MC trajectories for all three routing families by a detailed experimental evaluation. The derived findings demonstrate significant performance gains, both with respect to the no charger case as well as the different charging alternatives; in particular, the performance improvements include the network lifetime, as well as connectivity, coverage and energy balance properties.
Abstract: We call radiation at a point of a wireless network the total amount of electromagnetic quantity (energy or power density) the point is exposed to. The impact of radiation can be high and we believe it is worth studying and controlling; towards radiation-aware wireless networking we take (for the first time in the study of this aspect) a distributed computing, algorithmic approach. We exemplify this line of research by focusing on sensor networks, studying the minimum radiation path problem of finding the lowest radiation trajectory of a person moving from a source to a destination point in the network region. For this problem, we sketch the main ideas behind a linear program that can provide a tight approximation of the optimal solution, and then we discuss three heuristics that can lead to low radiation paths. We also plan to investigate the impact of diverse node mobility on the heuristics' performance.
Abstract: Intuitively, Braess's paradox states that destroying a part of a network may improve the common latency of selfish flows at Nash equilibrium. Such a paradox is a pervasive phenomenon in real-world networks. Any administrator, who wants to improve equilibrium delays in selfish networks, is facing some basic questions: (i) Is the network paradox-ridden? (ii) How can we delete some edges to optimize equilibrium flow delays? (iii) How can we modify edge latencies to optimize equilibrium flow delays?
Unfortunately, such questions lead to NP-hard problems in general. In this work, we impose some natural restrictions on our networks, e.g. we assume strictly increasing linear latencies. Our target is to formulate efficient algorithms for the three questions above. We manage to provide:
– A polynomial-time algorithm that decides if a network is paradox-ridden, when latencies are linear and strictly increasing.
– A reduction of the problem of deciding if a network with arbitrary linear latencies is paradox-ridden to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the edges with constant latency.
– An algorithm for finding a subnetwork that is almost optimal wrt equilibrium latency. Our algorithm is subexponential when the number of paths is polynomial and each path is of polylogarithmic length.
– A polynomial-time algorithm for the problem of finding the best subnetwork, which outperforms any known approximation algorithm for the case of strictly increasing linear latencies.
– A polynomial-time method that turns the optimal flow into a Nash flow by deleting the edges not used by the optimal flow, and performing minimal modifications to the latencies of the remaining ones.
Our results provide a deeper understanding of the computational complexity of recognizing the most severe manifestations of Braess's paradox, and our techniques show novel ways of using the probabilistic method and of exploiting convex separable quadratic programs.
Abstract: Intuitively, Braess’s paradox states that destroying a part of a network may improve the common latency of selfish flows at Nash equilibrium. Such a paradox is a pervasive phenomenon in real-world networks. Any administrator who wants to improve equilibrium delays in selfish networks, is facing some basic questions:
– Is the network paradox-ridden?
– How can we delete some edges to optimize equilibrium flow delays?
– How can we modify edge latencies to optimize equilibrium flow delays?
Unfortunately, such questions lead to NP-hard problems in general. In this work, we impose some natural restrictions on our networks, e.g. we assume strictly increasing linear latencies. Our target is to formulate efficient algorithms for the three questions above. We manage to provide:
– A polynomial-time algorithm that decides if a network is paradox-ridden, when latencies are linear and strictly increasing.
– A reduction of the problem of deciding if a network with (arbitrary) linear latencies is paradox-ridden to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the edges with constant latency.
– An algorithm for finding a subnetwork that is almost optimal wrt equilibrium latency. Our algorithm is subexponential when the number of paths is polynomial and each path is of polylogarithmic length.
– A polynomial-time algorithm for the problem of finding the best subnetwork which outperforms any known approximation for the case of strictly increasing linear latencies.
– A polynomial-time method that turns the optimal flow into a Nash flow by deleting the edges not used by the optimal flow, and performing minimal modifications on the latencies of the remaining ones.
Our results provide a deeper understanding of the computational complexity of recognizing the most severe manifestations of Braess’s paradox, and our techniques show novel ways of using the probabilistic method and of exploiting convex separable quadratic programs.
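For intuition about what "paradox-ridden" means, the classical four-node Braess instance (a textbook example, not taken from these papers) already exhibits the effect the algorithms above are meant to detect: deleting the zero-latency edge lowers the equilibrium latency from 2 to 3/2.

    def braess_equilibrium():
        """Classic Braess instance with unit total flow: arcs s->v and w->t have
        latency x (their flow), arcs s->w and v->t have latency 1, and v->w is a
        zero-latency shortcut."""
        # With the shortcut, the path s-v-w-t (latency 2x) dominates for x <= 1,
        # so all flow uses it, x = 1 on both flow-dependent arcs and latency is 2.
        latency_with = 1 + 0 + 1
        # Without the shortcut, symmetry splits the flow evenly (x = 1/2 per path)
        # and every used path has latency 1/2 + 1 = 3/2.
        latency_without = 0.5 + 1
        return latency_with, latency_without

    with_edge, without_edge = braess_equilibrium()
    print("equilibrium latency with the shortcut edge:  ", with_edge)
    print("equilibrium latency after deleting the edge: ", without_edge)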
Abstract: In this paper we consider communication issues arising in mobile networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of available frequencies is limited; thus, efficient solutions to the frequency allocation and the call control problem are essential. In the frequency allocation problem, given users that wish to communicate, the objective is to minimize the required spectrum of frequencies so that communication can be established without signal interference. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users served. We consider cellular, planar, and arbitrary network topologies. In particular, we study the on-line version of both problems using competitive analysis. For frequency allocation in cellular networks, we improve the best known competitive ratio upper bound of 3 achieved by the folklore Fixed Allocation algorithm, by presenting an almost tight competitive analysis for the greedy algorithm; we prove that its competitive ratio is between 2.429 and 2.5. For the call control problem, we present the first randomized algorithm that beats the deterministic lower bound of 3 achieving a competitive ratio of 2.934 in cellular networks. Our analysis has interesting extensions to arbitrary networks. Also, using Yao's Minimax Principle, we prove two lower bounds of 1.857 and 2.086 on the competitive ratio of randomized call control algorithms for cellular and arbitrary planar networks, respectively.
Abstract: In this paper we consider communication issues arising in cellular (mobile) networks that utilize frequency division multiplexing (FDM) technology. In such networks, many users within the same geographical region can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of available frequencies is limited; thus, efficient solutions to the frequency-allocation and the call-control problems are essential. In the frequency-allocation problem, given users that wish to communicate, the objective is to minimize the required spectrum of frequencies so that communication can be established without signal interference. The objective of the call-control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users served. We consider cellular, planar, and arbitrary network topologies.
In particular, we study the on-line version of both problems using competitive analysis. For frequency allocation in cellular networks, we improve the best known competitive ratio upper bound of 3 achieved by the folklore Fixed Allocation algorithm, by presenting an almost tight competitive analysis for the greedy algorithm; we prove that its competitive ratio is between 2.429 and 2.5 . For the call-control problem, we present the first randomized algorithm that beats the deterministic lower bound of 3 achieving a competitive ratio between 2.469 and 2.651 for cellular networks. Our analysis has interesting extensions to arbitrary networks. Also, using Yao's Minimax Principle, we prove two lower bounds of 1.857 and 2.086 on the competitive ratio of randomized call-control algorithms for cellular and arbitrary planar networks, respectively.
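In the simplest reading, the greedy algorithm analyzed above assigns each arriving call the first frequency that does not interfere with already-served calls; the sketch below uses that first-fit rule under the assumption that a call interferes exactly with calls in its own and adjacent cells.

    def greedy_frequency_allocation(calls, adjacent):
        """First-fit greedy: each arriving call gets the smallest frequency not
        used in its own cell or any adjacent (interfering) cell.  `calls` is the
        online sequence of cell ids, `adjacent` maps a cell to interfering cells."""
        used = {}                             # cell -> set of frequencies in use
        assignment = []
        for cell in calls:
            blocked = set(used.get(cell, set()))
            for nbr in adjacent.get(cell, []):
                blocked |= used.get(nbr, set())
            f = 0
            while f in blocked:
                f += 1
            used.setdefault(cell, set()).add(f)
            assignment.append((cell, f))
        return assignment

    # three mutually adjacent cells of a hexagonal cellular layout
    adjacent = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
    print(greedy_frequency_allocation(["A", "B", "A", "C", "B"], adjacent))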
Abstract: We present a set of three new time-dependent models with increasing flexibility for realistic route planning in
flight networks. By
these means, we obtain small graph sizes while modeling airport procedures
in a realistic way. With these graphs, we are able to efficiently
compute a set of best connections with multiple criteria over a full day.
It even turns out that due to the very limited graph sizes it is feasible
to precompute full distance tables between all airports. As a result, best
connections can be retrieved in a few microseconds on real world data.
Abstract: We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective i.e. towards estimating the least possible number of sensors to be deployed, their positions and operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps.
Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and thus being localized). We then design and analyze an efficient approximate method for sensor placement and operation, that with high probability and in polynomial expected time achieves a Θ(log n) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios.
Abstract: We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective i.e. towards estimating the least possible number of sensors to be deployed, their positions and operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps.
Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and thus being localized). We then design and analyze an efficient approximate method for sensor placement and operation, that with high probability and in polynomial expected time achieves a (log n) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios.
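The covering formulation above can be illustrated with the textbook greedy heuristic for set multi-cover with coverage requirement 3 (each point must end up covered by at least three chosen candidate sensors); the candidate sets below are hypothetical and the sketch is not necessarily the paper's exact method.

```python
# Greedy heuristic for set multi-cover with coverage requirement 3: repeatedly
# pick the candidate sensor covering the most still-deficient points.
# Candidate sets are hypothetical; this is the classical greedy rule.

def greedy_3_cover(universe, sets):
    need = {x: 3 for x in universe}        # residual coverage requirement per point
    chosen = []
    remaining = dict(enumerate(sets))
    while any(v > 0 for v in need.values()):
        best, best_gain = None, 0
        for idx, s in remaining.items():
            gain = sum(1 for x in s if need.get(x, 0) > 0)
            if gain > best_gain:
                best, best_gain = idx, gain
        if best is None:
            raise ValueError("coverage requirement cannot be met")
        chosen.append(best)
        for x in remaining.pop(best):
            if need.get(x, 0) > 0:
                need[x] -= 1
    return chosen

if __name__ == "__main__":
    pts = {1, 2, 3}
    candidate_sensors = [{1, 2}, {2, 3}, {1, 3}, {1, 2, 3}, {1}, {3}, {2, 3}]
    print(greedy_3_cover(pts, candidate_sensors))
```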
Abstract: We study the important problem of tracking moving
targets in wireless sensor networks. We try to overcome the
limitations of standard state of the art tracking methods based on
continuous location tracking, i.e. the high energy dissipation and
communication overhead imposed by the active participation of
sensors in the tracking process and the low scalability, especially
in sparse networks. Instead, our approach uses sensors in a
passive way: they only record and judiciously spread information
about observed target presence in their vicinity; this information
is then used by the (powerful) tracking agent to locate the target
by just following the traces left at sensors. Our protocol is greedy,
local, distributed, energy efficient and very successful, in the
sense that (as shown by extensive simulations) the tracking agent
manages to quickly locate and follow the target; also, we achieve
good trade-offs between the energy dissipation and latency.
Abstract: We investigate the problem of efficient wireless energy recharging in Wireless Rechargeable Sensor Networks (WRSNs). In such networks special mobile entities (called the Mobile Chargers) traverse the network and wirelessly replenish the energy of sensor nodes. In contrast to most current approaches, we envision methods that are distributed and use limited network information. We propose four new protocols for efficient recharging, addressing key issues which we identify, most notably (i) what are good coordination procedures for the Mobile Chargers and (ii) what are good trajectories for the Mobile Chargers. Two of our protocols (DC, DCLK) perform distributed, limited network knowledge coordination and charging, while two others (CC, CCGK) perform centralized, global network knowledge coordination and charging. As detailed simulations demonstrate, one of our distributed protocols outperforms a known state of the art method, while its performance gets quite close to the performance of the powerful centralized global knowledge method.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM)
has recently been proposed as a modulation technique for optical networks, because of its good spectral efficiency, flexibility, and tolerance to impairments. We consider the planning problem of an OFDM optical network, where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers and choosing an appropriate modulation level, taking into account the transmission distance. We introduce the Routing, Modulation Level and Spectrum Allocation (RMLSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, prove that it is also NP-complete and present various algorithms to solve it. We start by presenting an optimal ILP RMLSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RMLSA into its two constituent subproblems, namely, (i) routing and modulation level, and (ii) spectrum allocation (RML+SA), and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all the connections in the traffic matrix. In the sequential algorithm, we investigate two policies for defining the order in which connections are considered. We also use a simulated annealing meta-heuristic to obtain even better orderings. We examine the performance of the proposed algorithms through simulation experiments and evaluate the spectrum utilization benefits that can be obtained by utilizing OFDM elastic bandwidth allocation, when compared to a traditional WDM network.
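A toy sketch of two ingredients mentioned above, distance-adaptive modulation and spectrum allocation: the subcarrier count for a demand is derived from its rate and the reach of the chosen modulation level, and the subcarriers are then placed first-fit on a link's spectrum. All numbers are illustrative assumptions, not values from the paper.

```python
# Toy sketch of distance-adaptive modulation and first-fit spectrum allocation
# for a single connection (illustrative numbers, not the paper's algorithms).

# Hypothetical modulation levels: (maximum reach in km, bits per symbol)
MODULATIONS = [(4000, 1), (2000, 2), (1000, 3), (500, 4)]
SUBCARRIER_RATE_GBPS = 12.5   # assumed capacity of one subcarrier at 1 bit/symbol

def subcarriers_needed(rate_gbps, path_km):
    # choose the highest-order modulation whose reach covers the path length
    bits = max((b for reach, b in MODULATIONS if path_km <= reach), default=None)
    if bits is None:
        raise ValueError("path too long for any modulation level")
    per_sub = SUBCARRIER_RATE_GBPS * bits
    return int(-(-rate_gbps // per_sub))   # ceiling division

def first_fit(spectrum, width):
    """spectrum: list of booleans (True = slot occupied). Return start index or None."""
    run = 0
    for i, busy in enumerate(spectrum):
        run = 0 if busy else run + 1
        if run == width:
            return i - width + 1
    return None

if __name__ == "__main__":
    n = subcarriers_needed(rate_gbps=100, path_km=900)   # -> 3 subcarriers here
    link = [False] * 16
    link[0] = link[1] = True                             # two slots already taken
    print(n, first_fit(link, n))
```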
Abstract: One of the most important applications of wireless sensor networks is building monitoring and, more specifically, the early detection of emergency events and the provision of guidance for safe evacuation of the building. In this paper, we describe a demo application that, in the event of a fire inside a monitored building, uses the information from the deployed sensor network in order to find the shortest safest path away from the emergency and provides navigation guidance to the occupants (modelled by a mobile robot), in order to safely evacuate the building. For this demo, we developed our own ad-hoc robot-sensor interconnection using expansion connectors and programming in a low-level language.
Abstract: Modern Wireless Sensor Networks offer an easy, low-cost and reliable alternative to the back-end for monitoring and controlling large geographical areas like Buildings and Industries. We present the design and implementation details of an open and efficient Prototype System as a solution for a low-cost BMS that comprises heterogeneous, small-factor wireless devices. Placing that in the context of the Internet of Things, we come up with a solution that can cooperate with other systems installed on the same site to lower power consumption and costs, as well as benefit humans that use its services in a transparent way. We evaluate and assess key aspects of the performance of our prototype. Our findings indicate specific approaches to reduce the operation costs and allow the development of open applications.
Abstract: We study the problem of energy-balanced data propagation in wireless sensor networks. The energy balance property guarantees that the average per sensor energy dissipation is the same for all sensors in the network, during the entire execution of the data propagation protocol. This property is important since it prolongs the network's lifetime by avoiding early energy depletion of sensors.
We propose a new algorithm that in each step decides whether to propagate data one-hop towards the final destination (the sink), or to send data directly to the sink. This randomized choice balances the (cheap) one-hop transmissions with the direct transmissions to the sink, which are more expensive but “bypass” the sensors lying close to the sink. Note that, in most protocols, these close to the sink sensors tend to be overused and die out early.
By a detailed analysis we precisely estimate the probabilities for each propagation choice in order to guarantee energy balance. The needed estimation can easily be performed by current sensors using simple to obtain information. Under some assumptions, we also derive a closed form for these probabilities.
The fact (shown by our analysis) that direct (expensive) transmissions to the sink are needed only rarely, shows that our protocol, besides energy-balanced, is also energy efficient.
Abstract: We study the problem of energy-balanced data propagation in wireless sensor networks. The energy balance property guarantees that the average per sensor energy dissipation is the same for all sensors in the network, during the entire execution of the data propagation protocol. This property is important since it prolongs the network's lifetime by avoiding early energy depletion of sensors.
We propose a new algorithm that in each step decides whether to propagate data one-hop towards the final destination (the sink), or to send data directly to the sink. This randomized choice balances the (cheap) one-hop transmissions with the direct transmissions to the sink, which are more expensive but “bypass” the sensors lying close to the sink. Note that, in most protocols, these close to the sink sensors tend to be overused and die out early.
By a detailed analysis we precisely estimate the probabilities for each propagation choice in order to guarantee energy balance. The needed estimation can easily be performed by current sensors using simple to obtain information. Under some assumptions, we also derive a closed form for these probabilities.
The fact (shown by our analysis) that direct (expensive) transmissions to the sink are needed only rarely, shows that our protocol, besides energy-balanced, is also energy efficient.
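The hop-versus-direct trade-off can be illustrated with a small simulation under an assumed energy model (transmission cost proportional to the square of the distance, unit hop length); the forwarding probability here is a fixed parameter, whereas the work above derives per-sector probabilities that actually equalize the energy.

```python
# Illustrative simulation of the hop-or-direct randomized choice, under a
# hypothetical energy model (cost ~ distance squared, unit hop length).
# Not the paper's derivation: p_hop is fixed rather than tuned per sector.
import random

def simulate(num_sectors=10, msgs_per_sector=1000, p_hop=0.8, seed=1):
    random.seed(seed)
    energy = [0.0] * (num_sectors + 1)        # energy spent by each ring of sensors
    for src in range(1, num_sectors + 1):
        for _ in range(msgs_per_sector):
            i = src
            while i > 0:
                if i == 1 or random.random() < p_hop:
                    energy[i] += 1.0           # cheap one-hop transmission
                    i -= 1
                else:
                    energy[i] += float(i * i)  # expensive direct transmission to sink
                    i = 0
    return energy[1:]

if __name__ == "__main__":
    for sector, e in enumerate(simulate(), start=1):
        print(f"sector {sector}: energy {e:.0f}")
```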
Abstract: The energy balance property (i.e., all nodes having the same energy throughout the network evolution) contributes significantly (along with energy efficiency) to the maximization of the network lifespan and network connectivity. The problem of achieving energy-balanced propagation is well studied in static networks and has attracted a lot of research attention.
Recent technological advances have enabled sensor devices to be attached to mobile entities of our everyday life (e.g. smartphones, cars, PDAs, etc.), thus leading to the formation of highly mobile sensor networks.
Inspired by the aforementioned applications, this work is (to the best of our knowledge) the first to study the energy balance property in wireless networks where the nodes are highly and dynamically mobile. In particular, in this paper we propose a new diverse mobility model which is easily parameterized, and we also present a new protocol which tries to adaptively exploit the inherent node mobility in order to achieve energy balance in the network in an efficient way.
Abstract: Wireless Sensor Networks are comprised of a vast number of ultra-small, autonomous computing and communication devices, with restricted energy, that co-operate to accomplish a large sensing task. In this work: a) We propose extended versions of two data propagation protocols for such networks: the Sleep-Awake Probabilistic Forwarding Protocol (SW-PFR) and the Hierarchical Threshold sensitive Energy Efficient Network protocol (H-TEEN). These non-trivial extensions improve the performance of the original protocols, by introducing sleep-awake periods in the PFR protocol to save energy, and introducing a hierarchy of clustering in the TEEN protocol to better cope with large networks, b) We implemented the two protocols and performed an extensive simulation comparison of various important measures of their performance with a focus on energy consumption, c) We investigate in detail the relative advantages and disadvantages of each protocol, d) We discuss a possible hybrid combination of the two protocols towards optimizing certain goals.
Abstract: In this work we study energy efficient routing strategies
for wireless ad-hoc networks. In this kind of networks,
energy is a scarce resource and its conservation
and efficient use is a major issue. Our strategy follows
the multi-cost routing approach, according to which a
cost vector of various parameters is assigned to each
link. The parameters of interest are the number of hops
on a path, and the residual energy and the transmission
power of the nodes on the path. These parameters
are combined in various optimization functions,
corresponding to different routing algorithms, for selecting
the optimal path. We evaluate the routing algorithms
proposed in a number of scenarios, with respect
to energy consumption, throughput and other performance
parameters of interest. From the experiments
conducted we conclude that routing algorithms that take
into account energy-related parameters increase the
lifetime of the network, while achieving better performance
than other approaches, such as minimum hop
routing.
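As a minimal illustration of the multi-cost idea, the sketch below attaches a small cost vector to each link, combines the vectors along candidate paths, and lets an optimization function pick a path; the links, values and the particular objective are hypothetical.

```python
# Sketch of multi-cost path selection: each link carries a cost vector, the
# vectors are combined along a path, and an optimization function picks the
# best candidate.  Links, paths and the objective are illustrative only.

def path_vector(path, link_cost):
    hops, power, min_res = 0, 0.0, float("inf")
    for u, v in zip(path, path[1:]):
        tx_power, residual = link_cost[(u, v)]
        hops += 1
        power += tx_power                  # additive parameter
        min_res = min(min_res, residual)   # bottleneck parameter
    return hops, power, min_res

def objective(vec):
    hops, power, min_res = vec
    # e.g. minimize transmitted power, weighted by hops, per unit of
    # bottleneck residual energy (one of many possible functions)
    return power * hops / max(min_res, 1e-9)

def best_path(paths, link_cost):
    return min(paths, key=lambda p: objective(path_vector(p, link_cost)))

if __name__ == "__main__":
    # (transmission power in mW, residual energy in J) per link -- hypothetical
    costs = {("s", "a"): (10, 5.0), ("a", "t"): (10, 2.0),
             ("s", "b"): (25, 8.0), ("b", "t"): (25, 7.0),
             ("s", "t"): (80, 6.0)}
    candidates = [["s", "a", "t"], ["s", "b", "t"], ["s", "t"]]
    print(best_path(candidates, costs))
```

Different objective functions over the same path vectors correspond to the different routing algorithms compared in the work above.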
Abstract: In this work, we propose an energy-efficient multicasting algorithm
for wireless networks for the case where the transmission
powers of the nodes are fixed. Our algorithm is
based on the multicost approach and selects an optimal
energy-efficient set of nodes for multicasting, taking into account:
i) the node residual energies, ii) the transmission
powers used by the nodes, and iii) the set of nodes covered.
Our algorithm is optimal, in the sense that it can
optimize any desired function of the total power consumed
by the multicasting task and the minimum of the current
residual energies of the nodes, provided that the optimization
function is monotonic in each of these parameters. Our
optimal algorithm has non-polynomial complexity, thus, we
propose a relaxation producing a near-optimal solution in
polynomial time. The performance results obtained show
that the proposed algorithms outperform established solutions
for energy-aware multicasting, with respect to both
energy consumption and network lifetime. Moreover, it is
shown that the near-optimal multicost algorithm obtains
most of the performance benefits of the optimal multicost
algorithm at a smaller computational overhead.
Abstract: A crucial issue in wireless networks is to support efficiently communication patterns that are typical in traditional (wired) networks. These include broadcasting, multicasting, and gossiping (all-to-all communication). In this work we study such problems in static ad hoc networks. Since, in ad hoc networks, energy is a scarce resource, the important engineering question to be solved is to guarantee a desired communication pattern minimizing the total energy consumption. Motivated by this question, we study a series of wireless network design problems and present new approximation algorithms and inapproximability results.
Abstract: Many efforts have been made in recent years to model public transport timetables in order to
find optimal routes. The proposed models can be classified into two types: those representing the
timetable as an array, and those representing it as a graph. The array-based models have been
shown to be very effective in terms of query time, while the graph-based models usually answer
queries by computing shortest paths, and hence they are suitable to be used in combination with
speed-up techniques developed for road networks.
In this paper, we focus on the dynamic behavior of graph-based models considering the case
where transportation systems are subject to delays with respect to the given timetable. We
make three contributions: (i) we give a simplified and optimized update routine for the well-known
time-expanded model along with an engineered query algorithm; (ii) we propose a new
graph-based model tailored for handling dynamic updates; (iii) we assess the effectiveness of
the proposed models and algorithms by an experimental study, which shows that both models
require negligible update time and a query time which is comparable to that required by some
array-based models.
Abstract: In this paper, we demonstrate the significant impact of (a) the mobility rate and (b) the user density on the performance of routing protocols in ad-hoc mobile networks. In particular, we study the effect of these parameters on two different approaches for designing routing protocols: (a) the route creation and maintenance approach and (b) the support approach that forces few hosts to move, acting as helpers for message delivery. We study one representative protocol for each approach, i.e. AODV for the first approach and RUNNERS for the second. We have implemented the two protocols and performed a large scale and detailed simulation study of their performance. The main findings are: the AODV protocol behaves well in networks of high user density and low mobility rate, while its performance drops for sparse networks of highly mobile users. On the other hand, the RUNNERS protocol seems to tolerate well (and in fact benefit from) high mobility rates and low densities.
Abstract: Energy is a scarce resource in ad hoc wireless networks and it is of paramount importance to use it efficiently when establishing communication patterns. In this work we study algorithms for computing energy-efficient multicast trees in ad hoc wireless networks. Such algorithms either start with an empty solution which is gradually augmented to a multicast tree (augmentation algorithms) or take as input an initial multicast tree and 'walk' on different multicast trees for a finite number of steps until some acceptable decrease in energy consumption is achieved (local search algorithms).
We mainly focus on augmentation algorithms and in particular we have implemented a long list of such existing algorithms from the literature as well as new ones. We experimentally compare all these algorithms on random geometric instances of the problem and obtain results in terms of the energy efficiency of the solutions obtained. Additional results concerning the running time of our implementations are also presented. We also explore how much the solutions obtained by augmentation algorithms can be improved by local search algorithms. Our results show that one of our new algorithms and its variations achieve the most energy-efficient solutions while being very fast. Our investigations shed some light on those properties of geometric instances of the problem which make augmentation algorithms perform well.
Abstract: An enhanced impairment-aware path computation element (EPCE) for dynamic
transparent optical networks is proposed and experimentally evaluated. The obtained results show
that by using the EPCE, light-path setup times of a few seconds are achieved.
Abstract: Core optical networks using reconfigurable optical
switches and tunable lasers appear to be on the road towards
widespread deployment and could evolve to all-optical mesh
networks in the near future. Considering the impact of physical
layer impairments in the planning and operation of all-optical
(and translucent) networks is the main focus of the DICONET
project. The impairment aware network planning and operation
tool (NPOT) is the main outcome of the DICONET project, which
is explained in detail in this paper. The key building blocks of
the NPOT, consisting of network description repositories, the
physical layer performance evaluator, the impairment aware
routing and wavelength assignment engines, the component
placement modules, failure handling and the integration of
NPOT in the control plane are the main contributions of this
work. In addition, the experimental results of the DICONET proposal for centralized and distributed control plane integration schemes, as well as the performance of failure handling in terms of restoration time, are presented in this work.
Abstract: This paper evaluates a centralised impairment-aware path restoration approach for GMPLS-controlled
transparent optical networks. Experimental results on a 14-node network test-bed show successful
QoT compliant path restoration of around 3.6 seconds.
Abstract: In large scale networks users often behave selfishly, trying to minimize their routing cost. Modelling this as a noncooperative game may yield a Nash equilibrium with unboundedly poor network performance. To measure this inefficiency, the Coordination Ratio or Price of Anarchy (PoA) was introduced. It equals the ratio of the cost induced by the worst Nash equilibrium to the corresponding one induced by the overall optimum assignment of the jobs to the network. Toward improving the PoA of a given network, a series of papers model this selfish behavior as a Stackelberg or Leader-Followers game.
We consider random tuples of machines, with either linear or M/M/1 latency functions, and PoA at least a tuning parameter c. We validate a variant (NLS) of the Largest Latency First (LLF) Leader's strategy on tuples with PoA ≥ c. NLS experimentally improves on LLF for systems with inherently high PoA, where the Leader is constrained to control a low portion α of the jobs. This suggests even better performance for systems with arbitrary PoA. Also, we bounded experimentally the least Leader's portion α0 needed to induce optimum cost. Unexpectedly, as parameter c increases the corresponding α0 decreases, for M/M/1 latency functions. All these are implemented in an extensive Matlab toolbox.
Abstract: We investigate the efficiency of protocols for basic communication in ad-hoc mobile networks. All protocols are classified in the semi-compulsory, relay category according to which communication is achieved through a small team of mobile hosts, called the support, which move in a predetermined way and serves as an intermediate pool for receiving and delivering messages. We implement a new semi-compulsory, relay protocol in which the motion of the support is based on a so-called ``hunter" strategy developed for a pursuit-evasion game. We conduct an extensive, comparative experimental study of this protocol with other two existing protocols, each one possessing a different motion for its support. We considered several types of inputs, including among others two kinds of motion patterns (random and adversarial) for the mobile hosts not in the support. Our experiments showed that for all protocols the throughput scales almost linearly with the number of mobile hosts in the network, and that a small support size suffices for efficient communication. An interesting outcome is that in most cases the new protocol is inferior to the other two, although it has a global knowledge of the motion space.
Abstract: We investigate the impact of multiple, mobile sinks on
efficient data collection in wireless sensor networks. To
improve performance, our protocol design focuses on minimizing
overlaps of sink trajectories and balancing the service load
among the sinks. To cope with high network dynamics, placement
irregularities and limited network knowledge we propose three different
protocols: a) a centralized one, that explicitly equalizes spatial coverage;
this protocol relies on strong modeling assumptions, and also serves as a kind
of performance lower bound in uniform networks of low dynamics; b)
a distributed protocol based on mutual avoidance of sinks; and c) a clustering
protocol that distributively groups service areas towards balancing the load per sink.
Our simulation findings demonstrate significant gains in latency, while keeping the success
rate and the energy dissipation at very satisfactory levels even under
high network dynamics and deployment heterogeneity.
Abstract: We study the problem of fast and energy-efficient collection of sensory data using a mobile sink, in wireless sensor networks in which both the sensors and the sink move. Motivated by relevant applications, we focus on dynamic sensor mobility and heterogeneous sensor placement. Our approach basically suggests exploiting the sensor motion to adaptively propagate information based on local conditions (such as high placement concentrations), so that the sink gradually “learns”
the network and accordingly optimizes its motion. Compared to relevant solutions in the state of the art (such as the blind random walk, biased walks, and even optimized deterministic sink mobility), our method significantly reduces latency (the improvement ranges from 40% for uniform placements, to 800% for heterogeneous ones), while also improving the success rate and keeping the energy dissipation at very satisfactory levels.
Abstract: We propose a new data dissemination protocol for wireless sensor networks, that basically pulls some additional knowledge about the network in order to subsequently improve data forwarding towards the sink. This extra information is still local, limited and obtained in a distributed manner. This extra knowledge is acquired by only a small fraction of sensors thus the extra energy cost only marginally affects the overall protocol efficiency. The new protocol has low latency and manages to propagate data successfully even in the case of low densities. Furthermore, we study in detail the effect of failures and show that our protocol is very robust. In particular, we implement and evaluate the protocol using large scale simulation, showing that it significantly outperforms well known relevant solutions in the state of the art.
Abstract: Motivated by the problem of allocating optical bandwidth in tree–shaped WDM networks, we study the fractional path coloring problem in trees. We consider the class of locally-symmetric sets of paths on binary trees and prove that any such set of paths has a fractional coloring of cost at most 1.367L, where L denotes the load of the set of paths. Using this result, we obtain a randomized algorithm that colors any locally-symmetric set of paths of load L on a binary tree (with reasonable restrictions on its depth) using at most 1.367L+o(L) colors, with high probability.
Abstract: We discuss two different ways of having fun with two different kinds of games: On the one hand, we present a framework for developing multiplayer pervasive games that rely on the use of mobile sensor networks. On the other hand, we show how to exploit game theoretic concepts in order to study the graph-theoretic problem of vertex coloring.
Abstract: The technological as well as software advances in
microelectronics and embedded component design have led to the
development of low cost, small-sized devices capable of forming
wireless, ad-hoc networks and sensing a number of qualities of
their environment, while performing computations that depend
on the sensed qualities as well as information received by their
peers. These sensor networks rely on the collective power of
the separate devices as well as their computational and sensing
capabilities to understand "global" environmental states through
locally sampled information and local sensor interactions. Due
to the locality of the sensor networks, that naturally arises due
to the locality of their communications capabilities, a number
of interesting connections exist between these networks and
geometrical concepts and problems. In this paper we study two
simple problems that pertain to the formation of low power
and low interference communication patterns in fixed topology
sensor networks. We study the problem of using multihop
communication links instead of direct ones as well as the problem
of forming a communication ring of sensor networks so as to
reduce power consumption as well as interference from other
nodes. Our focus is on the connection between sensor networks
and geometrical concepts, rather than on practicality, so as to
highlight their interrelationship.
Abstract: A fundamental approach to efficiently finding best routes or optimal itineraries in traffic information
systems is to reduce the search space (part of graph visited) of the most commonly used
shortest path routine (Dijkstra's algorithm) on a suitably defined graph. We investigate reduction
of the search space while simultaneously retaining data structures, created during a preprocessing
phase, of size linear (i.e., optimal) to the size of the graph. We show that the search space of
Dijkstra's algorithm can be significantly reduced by extracting geometric information from a given layout of the graph and by encapsulating precomputed shortest-path information in resulting geometric
objects (containers). We present an extensive experimental study comparing the impact of
different types of geometric containers using test data from real-world traffic networks. We also
present new algorithms as well as an empirical study for the dynamic case of this problem, where
edge weights are subject to change and the geometric containers have to be updated and show that
our new methods are two to three times faster than recomputing everything from scratch. Finally,
in an appendix, we discuss the software framework that we developed to realize the implementations
of all of our variants of Dijkstra's algorithm. Such a framework is not trivial to achieve as our
goal was to maintain a common code base that is, at the same time, small, efficient, and flexible,
as we wanted to enhance and combine several variants in any possible way.
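The pruning idea can be sketched as follows: each edge carries a precomputed geometric container (here a bounding box) of the targets reached by shortest paths starting with that edge, and Dijkstra's algorithm relaxes an edge only if the query target lies inside its container. The graph, layout and containers below are hypothetical.

```python
# Sketch of Dijkstra's algorithm pruned by geometric containers: an edge is
# relaxed only if the target lies inside the edge's precomputed bounding box.
# Graph, coordinates and containers here are hypothetical.
import heapq

def inside(box, p):
    (xlo, ylo, xhi, yhi), (x, y) = box, p
    return xlo <= x <= xhi and ylo <= y <= yhi

def dijkstra_with_containers(graph, pos, container, source, target):
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            box = container.get((u, v))
            if box is not None and not inside(box, pos[target]):
                continue                     # prune: target not best reached via (u, v)
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

if __name__ == "__main__":
    g = {"a": [("b", 1.0), ("c", 2.0)], "b": [("c", 1.0)], "c": []}
    xy = {"a": (0, 0), "b": (1, 0), "c": (2, 0)}
    boxes = {("a", "b"): (1, 0, 2, 0), ("a", "c"): (2, 0, 2, 0), ("b", "c"): (2, 0, 2, 0)}
    print(dijkstra_with_containers(g, xy, boxes, "a", "c"))
```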
Abstract: Building efficient internet-scale data management services is the main focus of this chapter. In particular, we aim to show how to leverage DHT technology and extend it with novel algorithms and architectures in order to (i) improve efficiency and reliability for traditional DHT (exact-match) queries, particularly exploiting the abundance of altruism witnessed in real-life P2P networks, (ii) speedup range queries for data stored on DHTs, and (iii) support efficiently and scalably the publish/subscribe paradigm over DHTs, which crucially depends on algorithms for supporting rich queries on string-attribute data.
Abstract: Information retrieval (IR) in peer-to-peer (P2P) networks,
where the corpus is spread across many loosely coupled
peers, has recently gained importance. In contrast to IR
systems on a centralized server or server farm, P2P IR faces
the additional challenge of either being oblivious to global
corpus statistics or having to compute the global measures
from local statistics at the individual peers in an efficient,
distributed manner. One specific measure of interest is the
global document frequency for different terms, which would
be very beneficial as term-specific weights in the scoring and
ranking of merged search results that have been obtained
from different peers.
This paper presents an efficient solution for the problem
of estimating global document frequencies in a large-scale
P2P network with very high dynamics where peers can join
and leave the network on short notice. In particular, the
developed method takes into account the fact that the lo-
cal document collections of autonomous peers may arbitrar-
ily overlap, so that global counting needs to be duplicate-
insensitive. The method is based on hash sketches as a
technique for compact data synopses. Experimental stud-
ies demonstrate the estimator's accuracy, scalability, and
ability to cope with high dynamics. Moreover, the benefit
for ranking P2P search results is shown by experiments with
real-world Web data and queries.
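Hash sketches of the Flajolet-Martin kind can be illustrated as follows: every document id sets one bit of a bitmap according to the position of the least significant 1-bit of its hash, bitmaps from different peers are merged by bitwise OR (which makes the count duplicate-insensitive), and the first unset bit yields the estimate. The single-sketch setup and the document ids are simplifications.

```python
# Minimal Flajolet-Martin style hash sketch: duplicate-insensitive estimation
# of the number of distinct documents; sketches from different peers are
# merged by bitwise OR.  A single sketch is used here for brevity.
import hashlib

BITS = 32

def rho(x):
    """Position of the least significant 1-bit of x (0-based)."""
    return (x & -x).bit_length() - 1 if x else BITS

def sketch(doc_ids):
    bitmap = 0
    for d in doc_ids:
        h = int(hashlib.sha1(str(d).encode()).hexdigest(), 16) & ((1 << BITS) - 1)
        bitmap |= 1 << rho(h)
    return bitmap

def merge(*bitmaps):
    out = 0
    for b in bitmaps:
        out |= b        # OR makes the merged count ignore overlapping collections
    return out

def estimate(bitmap):
    r = 0
    while bitmap & (1 << r):
        r += 1                        # first position that is still zero
    return (2 ** r) / 0.77351         # standard Flajolet-Martin correction factor

if __name__ == "__main__":
    peer1 = sketch(range(0, 6000))
    peer2 = sketch(range(3000, 9000))          # overlaps with peer1
    print(round(estimate(merge(peer1, peer2))))  # order of the ~9000 distinct ids
```

A single sketch gives only a coarse estimate; averaging over several independent bitmaps, as in the actual estimator, reduces the variance.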
Abstract: We consider the offline version of the routing and
wavelength assignment (RWA) problem in transparent all-optical networks. In such networks and in the absence of regenerators, the signal quality of transmission degrades due to physical layer
impairments. We initially present an algorithm for solving the static RWA problem based on an LP relaxation formulation that tends to yield integer solutions. To account for signal degradation due to physical impairments, we model the effects of the path length, the path hop count, and the interference among lightpaths by imposing additional (soft) constraints on RWA. The objective of the resulting optimization problem is not only to serve the
connection requests using the available wavelengths, but also to minimize the total accumulated signal degradation on the selected lightpaths. Our simulation studies indicate that the proposed RWA algorithms select the lightpaths for the requested connections so as to avoid impairment generating sources, thus dramatically reducing the overall physical-layer blocking when compared to RWA algorithms that do not account for impairments.
Abstract: In this work we study the implementation of multicost rout-
ing in a distributed way in wireless mobile ad hoc networks.
In contrast to traditional single-cost routing, where each
path is characterized by a scalar, in multicost routing a
vector of cost parameters is assigned to each network link,
from which the cost vectors of candidate paths are calcu-
lated. These parameters are combined in various optimiza-
tion functions, corresponding to different routing algorithms,
for selecting the optimal path. Up until now the performance
of multicost and multi-constrained routing in wireless ad hoc
networks has been evaluated either at a theoretical level or
by assuming that nodes are static and have full knowledge
of the network topology and nodes' state. In the present
paper we assess the performance of multicost routing based
on energy-related parameters in mobile ad hoc networks by
embedding its logic in the Dynamic Source Routing (DSR)
algorithm, which is a well-known fully distributed routing
algorithm. We use simulations to compare the performance
of the multicost-DSR algorithm to that of the original DSR
algorithm and examine their behavior under various node
mobility scenarios. The results confirm that the multicost-
DSR algorithm improves the performance of the network in
comparison to the original DSR algorithm in terms of energy efficiency. The multicost-DSR algorithm enhances the
performance of the network not only by reducing energy
consumption overall in the network, but also by spreading
energy consumption more uniformly across the network, prolonging the network lifetime and reducing the packet drop
probability. Furthermore the delay suffered by the packets
reaching their destination for the case of the multicost-DSR
algorithm is shown to be lower than in the case of the original DSR algorithm.
Abstract: We present improved methods for computing a set of alternative source-to-destination routes
in road networks in the form of an alternative graph. The resulting alternative graphs are
characterized by minimum path overlap, small stretch factor, as well as low size and complexity.
Our approach improves upon a previous one by introducing a new pruning stage preceding any
other heuristic method and by introducing a new filtering and fine-tuning of two existing methods.
Our accompanying experimental study shows that the entire alternative graph can be computed
pretty fast even in continental size networks.
Abstract: We consider the online impairment-aware routing
and wavelength assignment (IA-RWA) problem in transparent
WDM networks. To serve a new connection, the online algorithm,
in addition to finding a route and a free wavelength (a lightpath),
has to guarantee its transmission quality, which is affected by
physical-layer impairments. Due to interference effects, the establishment
of the new lightpath affects and is affected by the other
lightpaths. We present two multicost algorithms that account
for the actual current interference among lightpaths, as well as
for other physical effects, performing a cross-layer optimization
between the network and physical layers. In multicost routing,
a vector of cost parameters is assigned to each link, from which
the cost vectors of the paths are calculated. The first algorithm
utilizes cost vectors consisting of impairment-generating source
parameters, so as to be generic and applicable to different physical
settings. These parameters are combined into a scalar cost
that indirectly evaluates the quality of candidate lightpaths. The
second algorithm uses specific physical-layer models to define
noise variance-related cost parameters, so as to directly calculate
the Q-factor of candidate lightpaths. The algorithms find a set of so-called non-dominated paths to serve the connection, in the sense that no path in the set is better than another with respect to all cost parameters.
To select the lightpath, we propose various optimization functions
that correspond to different IA-RWA algorithms. The proposed
algorithms combine the strength of multicost optimization with
low execution times, making them appropriate for serving online
connections.
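The non-dominated (Pareto) filtering step common to these multicost algorithms can be sketched directly; for simplicity all cost components below are to be minimized and the candidate vectors are hypothetical.

```python
# Sketch of non-dominated (Pareto) filtering of candidate paths: a path is
# kept only if no other candidate is at least as good in every cost component
# and strictly better in at least one.  Cost vectors are illustrative.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(paths):
    """paths: list of (label, cost_vector) pairs, all components minimized."""
    keep = []
    for label, vec in paths:
        if not any(dominates(other, vec) for _, other in paths):
            keep.append((label, vec))
    return keep

if __name__ == "__main__":
    # (length in km, hop count, noise variance) -- illustrative values
    candidates = [("p1", (300, 4, 0.02)),
                  ("p2", (450, 3, 0.05)),
                  ("p3", (500, 5, 0.06))]
    print(non_dominated(candidates))   # p3 is dominated by p2 and dropped
```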
Abstract: We present a unique location-based application of sensor networks. The ``Magnetize Words'' installation produces word clouds following visitors around, interacting with each other as the visitors meet in physical space. The installation can be based on either of two localization-providing platforms. The first achieves high localization accuracy while requiring visitors to carry identifying tags. The other provides low accuracy, but is fully nonintrusive.
Abstract: We propose efficient schemes for information-theoretically secure
key exchange in the Bounded Storage Model (BSM), where the adversary
is assumed to have limited storage. Our schemes generate a
secret One Time Pad (OTP) shared by the sender and the receiver,
from a large number of public random bits produced by the sender
or by an external source. Our schemes initially generate a small
number of shared secret bits, using known techniques. We introduce
a new method to expand a small number of shared bits to a
much longer, shared key.
Our schemes are tailored to the requirements of sensor nodes
and wireless networks. They are simple, efficient to implement and
take advantage of the fact that practical wireless protocols transmit
data in frames, unlike previous protocols, which assume access to
specific bits in a stream of data. Indeed, our main contribution is
twofold.
On the one hand, we construct schemes that are attractive in
terms of simplicity, computational complexity, number of bits read
from the shared random source and expansion factor of the initial
key to the final shared key.
On the other hand, we show how to transform any existing scheme
for key exchange in BSM into a more efficient scheme in the number
of bits it reads from the shared source, given that the source is
transmitted in frames.
Abstract: In this work, we study the impact of the dynamic changing of the network link capacities on the stability properties of packet-switched networks. Specifically, we consider the Adversarial, Quasi-Static Queuing Theory model, where each link capacity may take on only two possible (integer) values, namely 1 and C>1, under a (w,ρ)-adversary. We obtain the following results:
• Allowing such dynamic changes to the link capacities of a network with just ten nodes that uses the LIS (Longest-in-System) protocol for contention-resolution results in instability at certain injection rates, for large enough values of C.
• The combination of dynamically changing link capacities with compositions of contention-resolution protocols on network queues suffices for similar instability bounds: the composition of LIS with any of SIS (Shortest-in-System), NTS (Nearest-to-Source), and FTG (Furthest-to-Go) protocols is unstable at similar rates, for large enough values of C.
• The instability bound of the network subgraphs that are forbidden for stability is affected by the dynamic changes to the link capacities: we present improved instability bounds for all the directed subgraphs that were known to be forbidden for stability on networks running a certain greedy protocol.
Abstract: Wireless networks are nowadays widely used and constantly evolving; we are surrounded by numerous such networks. A modern idea is to exploit this variety and aggregate all available connections together towards improving Internet connectivity. In this paper we present our design of a networking system that enables participating entities to connect with each other and share their possible broadband connections. Every connected entity will gain broadband connection access from the ones that have and share one, independently of how many connections are available and how fast they are. Our goal is to improve the utilization of the broadband connections and to provide fault tolerance on these connections. To this end, the system is built on top of a peer-to-peer network, where the participating entities share their connections according to various scheduling algorithms.
Abstract: With this work we aim to make a three-fold contribution.
We first address the issue of supporting efficiently queries
over string-attributes involving prefix, suffix, containment,
and equality operators in large-scale data networks. Our
first design decision is to employ distributed hash tables
(DHTs) for the data network's topology, harnessing their
desirable properties. Our next design decision is to derive
DHT-independent solutions, treating DHT as a black box.
Second, we exploit this infrastructure to develop efficient
content based publish/subscribe systems. The main con-
tributions here are algorithms for the efficient processing of
queries (subscriptions) and events (publications). Specifi-
cally, we show that our subscription processing algorithms
require O(logN) messages for a N-node network, and our
event processing algorithms require O(l · logN) messages
(with l being the average string length).
Third, we develop algorithms for optimizing the proces-
sing of multi-dimensional events, involving several string at-
tributes. Further to our analysis, we provide simulation-
based experiments showing promising performance results
in terms of number of messages, required bandwidth, load
balancing, and response times.
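One simple way to realize prefix queries over a DHT used as a black box is to publish each string under the keys of a few of its prefixes and route a prefix query to the corresponding key, as in the rough sketch below; the dict-backed DHT and the fixed prefix lengths are simplifications, not the paper's algorithms.

```python
# Rough sketch of prefix queries on top of a DHT treated as a black box:
# publish a string under the hashed keys of selected prefixes, route a prefix
# query to one key.  The in-memory "DHT" and prefix lengths are illustrative.
import hashlib
from collections import defaultdict

PREFIX_LENGTHS = (1, 2, 3)

def dht_key(s):
    return hashlib.sha1(s.encode()).hexdigest()

class ToyDHT:
    def __init__(self):
        self.store = defaultdict(set)
    def put(self, key, value):
        self.store[key].add(value)     # one overlay lookup per put in a real DHT
    def get(self, key):
        return self.store.get(key, set())

def publish(dht, value):
    for l in PREFIX_LENGTHS:
        if len(value) >= l:
            dht.put(dht_key(value[:l]), value)

def prefix_query(dht, prefix):
    l = max(x for x in PREFIX_LENGTHS if x <= len(prefix))
    return {v for v in dht.get(dht_key(prefix[:l])) if v.startswith(prefix)}

if __name__ == "__main__":
    dht = ToyDHT()
    for w in ["network", "net", "node", "peer"]:
        publish(dht, w)
    print(prefix_query(dht, "ne"))     # {'network', 'net'}
```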
Abstract: Information retrieval (IR) in peer-to-peer (P2P) networks,
where the corpus is spread across many loosely coupled
peers, has recently gained importance. In contrast to IR
systems on a centralized server or server farm, P2P IR faces
the additional challenge of either being oblivious to global
corpus statistics or having to compute the global measures
from local statistics at the individual peers in an efficient,
distributed manner. One specific measure of interest is the
global document frequency for different terms, which would
be very beneficial as term-specific weights in the scoring and
ranking of merged search results that have been obtained
from different peers.
This paper presents an efficient solution for the problem
of estimating global document frequencies in a large-scale
P2P network with very high dynamics where peers can join
and leave the network on short notice. In particular, the
developed method takes into account the fact that the lo-
cal document collections of autonomous peers may arbitrar-
ily overlap, so that global counting needs to be duplicate-
insensitive. The method is based on hash sketches as a
technique for compact data synopses. Experimental stud-
ies demonstrate the estimator's accuracy, scalability, and
ability to cope with high dynamics. Moreover, the benefit
for ranking P2P search results is shown by experiments with
real-world Web data and queries.
Abstract: In this work we study the combination of multicost
routing and adjustable transmission power in wireless
ad hoc networks, so as to obtain dynamic energy- and
interference-efficient routes to optimize network performance.
In multi-cost routing, a vector of cost parameters is
assigned to each network link, from which the cost vectors
of candidate paths are calculated. Only at the end these
parameters are combined in various optimization functions,
corresponding to different routing algorithms, for selecting
the optimal path. The multi-cost routing problem is a
generalization of the multi-constrained problem, where no
constraints exist, and is also significantly more powerful
than single-cost routing. Since energy is an important
limitation of wireless communications, the cost parameters
considered are the number of hops, the interference caused,
the residual energy and the transmission power of the
nodes on the path; other parameters could also be included,
as desired. We assume that nodes can use power control to
adjust their transmission power to the desired level. The
experiments conducted show that the combination of multicost
routing and adjustable transmission power can lead to
reduced interference and energy consumption, improving
network performance and lifetime.
Abstract: In translucent (or managed reach) WDM optical
networks, regenerators are employed at specific nodes. Some of
the connections in such networks are routed transparently, while
others have to go through a sequence of 3R regenerators that serve
as “refueling stations” to restore their quality of transmission
(QoT). We extend an online multicost algorithm for transparent
networks presented in our previous study [1], to obtain an IA-RWA
algorithm that works in translucent networks and makes use,
when required, of the regenerators present at certain locations
of the network. To characterize a path, the algorithm uses a
multicost formulation with several cost parameters, including the
set of available wavelengths, the length of the path, the number of
regenerators used, and noise variance parameters that account for
the physical layer impairments. Given a new connection request
and the current utilization state of the network, the algorithm calculates
a set of non-dominated candidate paths, meaning that no path in this set is inferior to any other path with respect to all cost parameters. This set consists of all the cost-effective (in
terms of the domination relation) and feasible (in terms of QoT)
lightpaths for the given source-destination pair, including all the
possible combinations for the utilization of available regenerators
of the network. An optimization function or policy is then applied
to this set in order to select the optimal lightpath. Different optimization
policies correspond to different IA-RWA algorithms.
We propose and evaluate several optimization policies, such as the
most used wavelength, the best quality of transmission, the least
regeneration usage, or a combination of these rules. Our results
indicate that in a translucent network the employed IA-RWA
algorithm has to consider all problem parameters, namely, the
QoT of the lightpaths, the utilization of wavelengths and the
availability of regenerators, to efficiently serve the online traffic.
Abstract: Wireless sensor networks can be very useful in applications that require the detection of crucial events, in physical environments subjected to critical conditions, and the propagation of data reporting their realization to a control center. In this paper we propose jWebDust, a generic and modular application environment for developing and managing applications that are based on wireless sensor networks. Our software architecture provides a range of services that allow the creation of customized applications with minimum implementation effort that are easy to administrate. We move beyond the "networking-centric" view of sensor network research and focus on how the end user (administrator, control center supervisor, etc.) will visualize and interact with the system.
We here present its open architecture, the most important design decisions, and discuss its distinct features and functionalities. jWebDust allows heterogeneous components to interoperate (real world sensor networks will rarely be homogeneous) and allows the integrated management and control of multiple such networks by also defining web-based mechanisms to visualize the network state, the results of queries, and a means to inject queries in the network. The architecture also illustrates how existing protocols for various services can interoperate in a bigger framework - such as the tree construction, query routing, etc.
Abstract: In this book chapter we will consider key establishment protocols for wireless sensor networks.
Several protocols have been proposed in the literature for the establishment of a shared group key for wired networks.
The choice of a protocol depends on whether the key is established by one of the participants (and then transported to the other(s)) or agreed among the participants, and on the underlying cryptographic mechanisms (symmetric or asymmetric). Clearly, the design of key establishment protocols for sensor networks must deal with different problems and challenges that do not exist in wired networks. To name a few, wireless links are particularly vulnerable to eavesdropping, and sensor devices can be captured (and the secrets they contain can be compromised); in many upcoming wireless sensor networks, nodes cannot rely on the presence of an online trusted server (whereas most standardized authentication and key establishment protocols do rely on such a server).
In particular, we will consider five distributed group key establishment protocols. Each of these protocols applies a different algorithmic technique that makes it more suitable for (i) static sensor networks, (ii) sensor networks where nodes enter sleep mode (i.e. dynamic, with low rate of updates on the connectivity graph) and (iii) fully dynamic networks where nodes may even be mobile. On the other hand, the common factor for all five protocols is that they can be applied in dynamic groups (where members can be excluded or added) and provide forward and backward secrecy. All these protocols are based on the Diffie-Hellman key exchange algorithm and constitute natural extensions of it in the multiparty case.
Abstract: In this paper we study deliberate attacks on the infrastructure of large scale-free networks. To be successful, these attacks rely on the importance of individual vertices in the network, and the concept of centrality (originating from social science) has already been utilized in their study with success. Some measures of centrality, however, such as betweenness, have disadvantages that do not facilitate the research in this area. We show that with the aid of scale-free network characteristics such as the clustering coefficient we can get results that balance the current centrality measures, but also gain insight into the workings of these networks.
Abstract: In this paper we examine the problem of searching for some information item in the nodes of a fully
interconnected computer network, where each node contains information relevant to some topic
as well as links to other network nodes that also contain information, not necessarily related to
locally kept information. These links are used to facilitate the Internet users and mobile software
agents that try to locate specific pieces of information. However, the links do not necessarily point
to nodes containing information of interest to the user or relevant to the aims of the mobile agent.
Thus an element of uncertainty is introduced. For example, when an Internet user or some search
agent lands on a particular network node, they see a set of links that point to information that is,
supposedly, relevant to the current search. Therefore, we can assume that a link points to relevant
information with some unknown probability p that, in general, is related to the number of nodes
in the network (intuitively, as the network grows, this probability tends to zero since adding more
nodes to the network renders some extant links less accurate or obsolete). Consequently, since there
is uncertainty as to whether the links contained in a node's Web page are correct or not, a search
algorithm cannot rely on following the links systematically since it may end up spending too much
time visiting nodes that contain irrelevant information. In this work, we will describe and analyze
a search algorithm that is only allowed to transfer a fixed amount of memory along communication
links as it visits the network nodes. The algorithm is, however, allowed to use one bit of memory at
each node as an "already visited" flag. In this way the algorithm has its memory distributed to the
network nodes, avoiding overloading the network links as it moves from node to node searching for
the information. We work on fully interconnected networks for simplicity reasons and, moreover,
because according to some recent experimental evidence, such networks can be considered to be a
good approximation of the current structure of the World Wide Web.
Abstract: We extend here the Population Protocol model of Angluin et al. [2004,2006] in order to model more powerful networks of resource-limited agents that are possibly mobile. The main feature of our extended model, called the Mediated Population Protocol (MPP) model, is to allow edges of the communication graph to have states that belong to a constant size set. We then allow the protocol rules for pairwise interactions to modify the corresponding edge state. Protocol specifications preserve both uniformity and anonymity. We first focus on the computational power of the MPP model on complete communication graphs and initially identical edges. We provide the following exact characterization for the class MPS of stably computable predicates: A predicate is in MPS iff it is symmetric and is in NSPACE(n^2). We finally ignore the input to the agents and study MPP's ability to compute graph properties.
Abstract: We extend here the Population Protocol model of Angluin et al. [2004] in order to model more powerful networks of very small resource-limited artefacts (agents) that are possibly mobile. Communication can happen only between pairs of artefacts. A communication graph (or digraph) denotes the permissible pairwise interactions. The main feature of our extended model is to allow edges of the communication graph, G, to have states that belong to a constant size set. We also allow edges to have read-only costs, whose values also belong to a constant size set. We then allow the protocol rules for pairwise interactions to modify the corresponding edge state. Thus, our protocol specifications are still independent of the population size and do not use agent ids, i.e. they preserve scalability, uniformity and anonymity. Our Mediated Population Protocols (MPP) can stably compute graph properties of the communication graph. We show this for the properties of maximal matchings (in undirected communication graphs), also for finding the transitive closure of directed graphs and for finding all edges of small cost. We demonstrate that our mediated protocols are stronger than the classical population protocols, by presenting a mediated protocol that stably computes the product of two positive integers, when G is the complete graph. This is not a semilinear predicate. To show this fact, we state and prove a general Theorem about the Composition of two stably computing mediated population protocols. We also show that all predicates stably computable in our model are (non-uniformly) in the class NSPACE(m), where m is the number of edges of the communication graph. We also define Randomized MPP and show that any Peano predicate accepted by an MPP can be verified in deterministic polynomial time.
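The maximal-matching example mentioned above gives a feel for mediated protocols: agents hold a free/matched state, every edge holds its own state, and the single rule (free, free, unmatched edge) → (matched, matched, matched edge) stabilizes to a maximal matching under a fair scheduler. The random scheduler and graph in this sketch are illustrative choices.

```python
# Sketch of a mediated population protocol computing a maximal matching:
# agents interact in random pairs over edges that carry their own state.
# The random scheduler and the example graph are illustrative.
import random

def mpp_maximal_matching(edges, rounds=10000, seed=3):
    random.seed(seed)
    agent_state = {}                              # agent -> "free" or "matched"
    edge_state = {e: "unmatched" for e in edges}  # per-edge state
    for u, v in edges:
        agent_state.setdefault(u, "free")
        agent_state.setdefault(v, "free")
    for _ in range(rounds):                       # fair random scheduler
        u, v = random.choice(edges)
        if (agent_state[u] == "free" and agent_state[v] == "free"
                and edge_state[(u, v)] == "unmatched"):
            agent_state[u] = agent_state[v] = "matched"
            edge_state[(u, v)] = "matched"
    return [e for e, s in edge_state.items() if s == "matched"]

if __name__ == "__main__":
    g = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
    print(mpp_maximal_matching(g))
```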
Abstract: The promises inherent in users coming together to form data
sharing network communities bring to the foreground new problems formulated
over such dynamic, ever growing, computing, storage, and networking
infrastructures. A key open challenge is to harness these highly
distributed resources toward the development of an ultra scalable, efficient
search engine. From a technical viewpoint, any acceptable solution
must fully exploit all available resources dictating the removal of any
centralized points of control, which can also readily lead to performance
bottlenecks and reliability/availability problems. Equally importantly,
however, a highly distributed solution can also facilitate pluralism in informing
users about internet content, which is crucial in order to preclude
the formation of information-resource monopolies and the biased visibility
of content from economically-powerful sources. To meet these challenges,
the work described here puts forward MINERVA∞, a novel search engine architecture, designed for scalability and efficiency. MINERVA∞
encompasses a suite of novel algorithms, including algorithms for creating
data networks of interest, placing data on network nodes, load balancing,
top-k algorithms for retrieving data at query time, and replication algorithms
for expediting top-k query processing. We have implemented the
proposed architecture and we report on our extensive experiments with
real-world, web-crawled, and synthetic data and queries, showcasing the
scalability and efficiency traits of MINERVA∞.
Abstract: We present here Fun in Numbers, a framework for developing multiplayer pervasive games that rely on the use of ad hoc mobile sensor networks. The unique feature in such games is that players interact with each other and their surrounding environment by using movement and presence as a means of performing game-related actions, utilizing sensor devices. We present the fundamental issues and challenges related to this type of game, along with the scenarios associated with them. Our framework is developed using Java and is based on a multilayer architecture, which provides developers with a set of templates and services for building and operating new games. Our framework handles a number of challenging fundamental and practical issues, such as synchronization, network congestion, delay-tolerant communication and neighbor discovery. We also present our platform and identify issues that arise in pervasive games which utilize sensor network nodes. The implemented games show how to use non-conventional user interface methods to breathe new life into familiar concepts, like the multiplayer games played out in open space.
Abstract: In this work, we propose an obstacle model to be used while simulating wireless sensor networks. To the best of our knowledge, this is the first time such an integrated and systematic obstacle model appears. We define several types of obstacles that can be found inside the deployment area of a wireless sensor network and provide a categorization of these obstacles, based on their nature (physical and communication obstacles), their shape, as well as
their ability to change over time. In light of this obstacle model we conduct extensive simulations in order to study the effects of obstacles on the performance of representative data propagation protocols for wireless sensor networks. Our findings
show that obstacle presence has a significant impact on protocol performance. Also, we demonstrate the effect of each obstacle type on different protocols, thus providing the network designer with advice on which protocol is best to use.
Abstract: Recent rapid developments in micro-electro-mechanical systems
(MEMS), wireless communications and digital electronics have already
led to the development of tiny, low-power, low-cost sensor devices.
Such devices integrate sensing, limited data processing and restricted
communication capabilities.
Each sensor device individually might have small utility, however the
effective distributed co-ordination of large numbers of such devices can
lead to the efficient accomplishment of large sensing tasks. Large numbers
of sensors can be deployed in areas of interest (such as inaccessible
terrains or disaster places) and use self-organization and collaborative
methods to form an ad-hoc network.
We note however that the efficient and robust realization of such large,
highly-dynamic, complex, non-conventional networking environments is
a challenging technological and algorithmic task, because of the unique
characteristics and severe limitations of these devices.
This talk will present and discuss several important aspects of the
design, deployment and operation of sensor networks. In particular, we
provide a brief description of the technical specifications of state-of-the-art
sensor devices, a discussion of possible models used to abstract such networks,
a discussion of some key algorithmic design techniques (like randomization,
adaptation and hybrid schemes), a presentation of representative
protocols for sensor networks, for important problems including data
propagation, collision avoidance and energy balance, and an evaluation
of crucial performance properties (correctness, efficiency, fault-tolerance)
of these protocols, both with analytic and simulation means.
Abstract: In this paper we propose an energy-aware broadcast algorithm for wireless networks. Our algorithm is based on the multicost approach and selects the set of nodes that, by transmitting, implement broadcasting in an optimally energy-efficient way. The energy-related parameters taken into account are the node transmission power and the node residual energy. The algorithm's complexity, however, is non-polynomial; therefore, we propose a relaxation producing a near-optimal solution in polynomial time. We also consider a distributed information exchange scheme that can be coupled with the proposed algorithms and examine the overhead introduced by this integration. Using simulations we show that the proposed algorithms outperform other solutions in the literature in terms of energy efficiency. Moreover, it is shown that the near-optimal algorithm obtains most of the performance benefits of the optimal algorithm at a smaller computational overhead.
Abstract: A key problem in Grid networks is how to efficiently manage the available infrastructure, in order to
satisfy user requirements and maximize resource utilization. This is in large part influenced by the
algorithms responsible for the routing of data and the scheduling of tasks. In this paper, we present several
multi-cost algorithms for the joint scheduling of the communication and computation resources that
will be used by a Grid task. We propose a multi-cost scheme of polynomial complexity that performs
immediate reservations and selects the computation resource to execute the task and determines the
path to route the input data. Furthermore, we introduce multi-cost algorithms that perform advance
reservations and thus also find the starting times for the data transmission and the task execution. We
initially present an optimal scheme of non-polynomial complexity and by appropriately pruning the set
of candidate paths we also give a heuristic algorithm of polynomial complexity. Our performance results
indicate that in a Grid network in which tasks are either CPU- or data-intensive (or both), it is beneficial
for the scheduling algorithm to jointly consider the computational and communication problems. A
comparison between immediate and advance reservation schemes shows the trade-offs with respect to
task blocking probability, end-to-end delay and the complexity of the algorithms.
Abstract: We propose a class of novel energy-efficient multi-cost routing algorithms for wireless mesh networks, and evaluate their performance. In multi-cost routing, a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated using appropriate operators. In the end these parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. We evaluate the performance of the proposed energy-aware multi-cost routing algorithms under two models. In the network evacuation model, the network starts with a number of packets that have to be transmitted and an amount of energy per node, and the objective is to serve the packets in the smallest number of steps, or serve as many packets as possible before the energy is depleted. In the dynamic one-to-one communication model, new data packets are generated continuously and nodes are capable of recharging their energy periodically, over an infinite time horizon, and we are interested in the maximum achievable steady-state throughput, the packet delay, and the energy consumption. Our results show that energy-aware multi-cost routing increases the lifetime of the network and achieves better overall network performance than other approaches.
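The following short Python sketch illustrates the multi-cost idea described in the abstract above: per-link cost vectors are combined along a path with different operators (additive for hops and transmission power, minimum for residual energy) and only then folded into an optimization function. The particular fields and the scoring function are illustrative assumptions, not the exact algorithms evaluated in these papers.

    def path_cost_vector(links):
        # Combine per-link cost vectors along a path: hop count and transmission
        # power are additive, residual energy is the minimum over the path.
        hops = len(links)
        power = sum(l["tx_power"] for l in links)
        min_residual = min(l["residual_energy"] for l in links)
        return hops, power, min_residual

    def select_path(candidate_paths):
        # One possible optimization function: prefer low total transmission power,
        # penalizing paths whose weakest node is nearly depleted (illustrative only).
        def score(path):
            hops, power, min_residual = path_cost_vector(path)
            return power / max(min_residual, 1e-9)
        return min(candidate_paths, key=score)

    # Two hypothetical candidate paths, each a list of links with made-up values.
    p1 = [{"tx_power": 1.0, "residual_energy": 5.0},
          {"tx_power": 1.0, "residual_energy": 4.0}]
    p2 = [{"tx_power": 2.5, "residual_energy": 9.0}]
    print(select_path([p1, p2]))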
Abstract: In this work we study the combination of multicost routing and adjustable transmission power in wireless ad-hoc networks, so as to obtain dynamic energy- and interference-efficient routes that optimize network performance. In multi-cost routing, a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated. Only at the end are these parameters combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. The multi-cost routing problem is a generalization of the multi-constrained problem, where no constraints exist, and is also significantly more powerful than single-cost routing. Since energy is an important limitation of wireless communications, the cost parameters considered are the number of hops, the interference caused, the residual energy and the transmission power of the nodes on the path; other parameters could also be included, as desired. We assume that nodes can use power control to adjust their transmission power to the desired level. The experiments conducted show that the combination of multi-cost routing and adjustable transmission power can lead to reduced interference and energy consumption, improving network performance and lifetime.
Abstract: In this work we study the dynamic one-to-one communica-
tion problem in energy- and capacity-constrained wireless ad-hoc net-
works. The performance of such networks is evaluated under random
traffic generation and continuous energy recharging at the nodes over an
infinite-time horizon. We are interested in the maximum throughput that
can be sustained by the network with the node queues being finite and in
the average packet delay for a given throughput. We propose a multicost
energy-aware routing algorithm and compare its performance to that of
minimum-hop routing. The results of our experiments show that gener-
ally the energy-aware algorithm achieves a higher maximum throughput
than the minimum-hop algorithm. More specifically, when the network
is mainly energy-constrained and for the 2-dimensional topology consid-
ered, the throughput of the proposed energy-aware routing algorithm is
found to be almost twice that of the minimum-hop algorithm.
Abstract: We provide an improved FPTAS for multiobjective shortest paths—a fundamental (NP-hard) problem in multiobjective optimization—along with a new generic method for obtaining FPTAS to any multiobjective optimization problem with non-linear objectives. We show how these results can be used to obtain better approximate solutions to three related problems, multiobjective constrained [optimal] path and non-additive shortest path, that have important applications in QoS routing and in traffic optimization. We also show how to obtain a FPTAS to a natural generalization of the weighted multicommodity flow problem with elastic demands and values that models several realistic scenarios in transportation and communication networks.
Abstract: In this work, we study the fundamental naming and counting problems (and some variations) in networks that are anonymous, unknown, and possibly dynamic. In counting, nodes must determine the size of the network n and in naming they must end up with unique identities. By anonymous we mean that all nodes begin from identical states
apart possibly from a unique leader node and by unknown that nodes
have no a priori knowledge of the network (apart from some minimal
knowledge when necessary) including ignorance of n. Network dynamicity is modeled by the 1-interval connectivity model [KLO10], in which communication is synchronous and a (worst-case) adversary chooses the edges of every round subject to the condition that each instance is connected. We first focus on static networks with broadcast where we prove that, without a leader, counting is impossible to solve and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. We also show that a unique leader suffices in order to solve counting in linear time.
Then we focus on dynamic networks with broadcast. We conjecture that
dynamicity renders nontrivial computation impossible. In view of this,
we let the nodes know an upper bound on the maximum degree that will
ever appear and show that in this case the nodes can obtain an upper
bound on n. Finally, we replace broadcast with one-to-each, in which a
node may send a different message to each of its neighbors. Interestingly,
this natural variation is proved to be computationally equivalent to a
full-knowledge model, in which unique names exist and the size of the
network is known.
Abstract: We present the NanoPeers architecture paradigm, a
peer-to-peer network of lightweight devices, lacking all or
most of the capabilities of their computer-world counterparts.
We identify the problems arising when we apply current
routing and searching methods to this nano-world, and
present some initial solutions, using a case study of a sensor
network instance; Smart Dust. Furthermore, we propose
the P2P Worlds framework as a hybrid P2P architecture
paradigm, consisting of cooperating layers of P2P
networks, populated by computing entities with escalating
capabilities. Our position is that (i) experience gained
through research and experimentation in the field of P2P
computing, can be indispensable when moving down the
stair of computing capabilities, and that (ii) the proposed
framework can be the basis of numerous real-world applications,
opening up several challenging research problems.
Abstract: Evolutionary dynamics have been traditionally studied in the context of homogeneous populations, mainly described by the Moran process [15]. Recently, this approach has been generalized in [13] by arranging individuals on the nodes of a network (in general, directed). In this setting, the existence of directed arcs enables the simulation of extreme phenomena, where the fixation probability of a randomly placed mutant (i.e. the probability that the offsprings of the mutant eventually spread over the whole population) is arbitrarily small or large. On the other hand, undirected networks (i.e. undirected graphs) seem to have a smoother behavior, and thus it is more challenging to find suppressors/amplifiers of selection, that is, graphs with smaller/greater fixation probability than the complete graph (i.e. the homogeneous population). In this paper we focus on undirected graphs. We present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation
probability of general undirected graphs. As our main contribution, we introduce a natural alternative to the model proposed in [13]. In our new evolutionary model, all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. That is, the behavior of the individuals in our new model and in the model of [13] can be interpreted as an “aggregation” vs. an “all-or-nothing” strategy, respectively. We prove that our new model of mutual influences admits a potential function, which guarantees the convergence of the system for any graph topology and any initial fitness vector of the individuals. Furthermore, we prove fast convergence to the stable state for the case of the complete graph, and provide almost tight bounds on the limit fitness of the individuals. Apart from being important in its own right, this new evolutionary model appears to be useful also in the abstract modeling of control mechanisms over invading populations in networks. We demonstrate this by introducing and analyzing two alternative control approaches, for which we bound the time needed to stabilize to the “healthy” state of the system.
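For context, the baseline dynamics referred to above (the [13]-style graph Moran process, as opposed to the new simultaneous-interaction model) can be estimated by direct Monte Carlo simulation. The sketch below is a generic illustration with an arbitrary fitness value, not code from the paper.

    import random

    def fixation_probability(adj, r=1.1, trials=2000, seed=0):
        # Monte Carlo estimate of mutant fixation probability under the graph
        # Moran process: at each step an individual reproduces with probability
        # proportional to fitness (mutants have fitness r, residents 1) and its
        # offspring replaces a uniformly random neighbour.
        rng = random.Random(seed)
        nodes = list(adj)
        fixed = 0
        for _ in range(trials):
            mutant = {v: False for v in nodes}
            mutant[rng.choice(nodes)] = True          # one random initial mutant
            count = 1
            while 0 < count < len(nodes):
                weights = [r if mutant[v] else 1.0 for v in nodes]
                u = rng.choices(nodes, weights=weights)[0]
                v = rng.choice(adj[u])                # offspring replaces a neighbour
                if mutant[v] != mutant[u]:
                    count += 1 if mutant[u] else -1
                    mutant[v] = mutant[u]
            fixed += count == len(nodes)
        return fixed / trials

    # Example: the complete graph on 6 vertices (the homogeneous population).
    K6 = {v: [u for u in range(6) if u != v] for v in range(6)}
    print(fixation_probability(K6))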
Abstract: Geographic routing scales well in sensor networks, mainly
due to its stateless nature. Still, most of the algorithms are
concerned with finding some path, while the optimality of
the path is difficult to achieve. In this paper we present
a novel geographic routing algorithm with obstacle
avoidance properties. It aims at finding the optimal path
from a source to a destination when some areas of the network
are unavailable for routing due to low local density or
obstacle presence. It locally and gradually over time (but,
as we show, quite fast) evaluates and updates the suitability
of previously used paths and ignores non-optimal paths
in further routing. By means of extensive simulations, we
compare its performance to existing state-of-the-art
protocols, showing that it performs much better in terms of
path length, thus minimizing latency, space, overall traffic
and energy consumption.
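For readers unfamiliar with geographic routing, the sketch below shows only the underlying greedy, stateless forwarding step (pick the neighbour closest to the destination, and fail at a local minimum such as an obstacle boundary); the obstacle-avoidance and path-suitability learning of the proposed algorithm are not reproduced here, and all coordinates are made up.

    import math

    def greedy_next_hop(current, neighbours, destination, positions):
        # Baseline greedy geographic step: among the neighbours of the current
        # node, pick the one closest to the destination, provided it makes
        # progress; return None at a 'local minimum' (e.g. an obstacle or a
        # low-density region).
        def dist(a, b):
            (ax, ay), (bx, by) = positions[a], positions[b]
            return math.hypot(ax - bx, ay - by)
        best = min(neighbours, key=lambda n: dist(n, destination), default=None)
        if best is None or dist(best, destination) >= dist(current, destination):
            return None
        return best

    # Tiny illustrative topology (coordinates are made up).
    positions = {"s": (0, 0), "a": (1, 0.5), "b": (1, -2), "d": (3, 0)}
    print(greedy_next_hop("s", ["a", "b"], "d", positions))  # -> "a"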
Abstract: We study network load games, a class of routing games in
networks which generalize selfish routing games on networks consisting
of parallel links. In these games, each user aims to route some traffic from
a source to a destination so that the maximum load she experiences in the
links of the network she occupies is minimum given the routing decisions
of other users. We present results related to the existence, complexity,
and price of anarchy of Pure Nash Equilibria for several network load
games. As corollaries, we present interesting new statements related to
the complexity of computing equilibria for selfish routing games in net-
works of restricted parallel links.
Abstract: We propose new burst assembly schemes and fast reservation (FR) protocols for Optical Burst Switched (OBS) networks that are based on traffic prediction. The burst assembly schemes aim at minimizing (for a given burst size) the average delay of the packets incurred during the burst assembly process, while the fast reservation protocols aim at further reducing the end-to-end delay of the data bursts. The burst assembly techniques use a linear prediction filter to estimate the number of packet arrivals at the ingress node in the following interval, and launch a new burst into the network when a certain criterion, different for each proposed scheme, is met. The fast reservation protocols use prediction filters to estimate the expected length of the burst and the time needed for the burst assembly process to complete. A Burst Header Packet (BHP) carrying these estimates is sent before the burst is completed, in order to reserve bandwidth at intermediate nodes for the time interval during which the burst is expected to pass through these nodes. Reducing the packet aggregation delay and the time required to perform the reservations reduces the total time needed for a packet to be transported over an OBS network and is especially important for real-time applications. We evaluate the performance of the proposed burst assembly schemes and show that a number of them outperform the previously proposed timer-based, length-based and average delay-based burst assembly schemes. We also look at the performance of the fast reservation (FR) protocols in terms of the probability of successfully establishing the reservations required to transport the burst.
Abstract: We propose new burst assembly techniques that aim at reducing the average delay experienced by the packets during the burstification process in optical burst switched (OBS) networks, for a given average size of the bursts produced. These techniques use a linear prediction filter to estimate the number of packet arrivals at the ingress node in the following interval, and launch a new burst into the network when a certain criterion, which is different for each proposed scheme, is met. Reducing the packet burstification delay, for a given average burst size, is essential for real-time applications; correspondingly, increasing the average burst size for a given packet burstification delay is important for reducing the number of bursts injected into the network and the associated overhead imposed on the core nodes. We evaluate the performance of the proposed schemes and show that two of them outperform the previously proposed timer-based, length-based and average delay-based burst aggregation schemes in terms of the average packet burstification delay for a given average burst size.
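The sketch below illustrates, in Python, the general shape of a prediction-assisted launch criterion of the kind described above: a simple linear predictor of next-interval packet arrivals feeds a threshold test. The coefficients, threshold, and criterion are illustrative assumptions, not the specific schemes proposed in these papers.

    def predict_next_arrivals(history, coeffs=(0.5, 0.3, 0.2)):
        # Linear prediction of packet arrivals in the next interval from the
        # most recent intervals (illustrative fixed coefficients, most recent
        # interval weighted highest).
        recent = history[-len(coeffs):][::-1]
        return sum(c * x for c, x in zip(coeffs, recent))

    def should_launch_burst(current_size, history, target_size):
        # Example launch criterion: send the burst now if the packets already
        # queued plus the predicted arrivals of the next interval would exceed
        # the target burst size.
        return current_size + predict_next_arrivals(history) >= target_size

    # Illustrative numbers: 180 packets queued, recent arrivals per interval.
    print(should_launch_burst(180, [40, 55, 60], target_size=200))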
Abstract: We address the call control problem in wireless cellular networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region (cell) can communicate simultaneously with other users of the network using distinct frequencies. The available frequency spectrum is limited; hence, its management should be done efficiently. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate in a cellular network, to maximize the number of users that communicate without signal interference. We study the online version of the problem in cellular networks using competitive analysis and present new upper and lower bounds.
Abstract: Wireless sensor networks are about to be part of everyday life. Homes and workplaces capable of self-controlling and adapting air-conditioning for different temperature and humidity levels, sleepless forests ready to detect and react in case of a fire, vehicles able to avoid sudden obstacles or possibly able to self-organize routes to avoid congestion, and so on, will probably be commonplace in the very near future. Mobility plays a central role in such systems and so does passive mobility, that is, mobility of the network stemming from the environment itself. The population protocol model was an intellectual invention aiming to describe such systems in a minimalistic and analysis-friendly way. Having as a starting-point the inherent limitations but also the fundamental establishments of the population protocol model, we try in this monograph to present some realistic and practical enhancements that give birth to some new and surprisingly powerful (for this kind of systems) computational models.
Abstract: Wireless Sensor Networks (WSNs) constitute a recent and promising new
technology that is widely applicable. Due to the applicability of this
technology and its obvious importance for the modern distributed
computational world, the formal scientific foundation of its inherent laws
becomes essential. As a result, many new computational models for WSNs
have been proposed. Population Protocols (PPs) are a special category of
such systems. These are mainly identified by three distinctive
characteristics: the sensor nodes (agents) move passively, that is, they
cannot control the underlying mobility pattern, the available memory to
each agent is restricted, and the agents interact in pairs. It has been
proven that a predicate is computable by the PP model iff it is
semilinear. The class of semilinear predicates is a fairly small class. In
this work, our basic goal is to enhance the PP model in order to improve
the computational power. We first make the assumption that not only the
nodes but also the edges of the communication graph can store restricted
states. In a complete graph of n nodes it is like having added O(n^2)
additional memory cells which are only read and written by the endpoints
of the corresponding edge. We prove that the new model, called Mediated
Population Protocol model, can operate as a distributed nondeterministic
Turing machine (TM) that uses all the available memory. The only
difference from a usual TM is that this one computes only symmetric
languages. More formally, we establish that a predicate is computable by
the new model iff it is symmetric and belongs to NSPACE(n^2). Moreover, we
study the ability of the new model to decide graph languages (for general
graphs). The next step is to ignore the states of the edges and provide
another enhancement straight away from the PP model. The assumption now is
that the agents are multitape TMs equipped with infinite memory, that can
perform internal computation and interact with other agents, and we define
space-bounded computations. We call this the Passively mobile Machines
model. We prove that if each agent uses at most f(n) memory for f(n)=Ω(log
n) then a predicate is computable iff it is symmetric and belongs to
NSPACE(nf(n)). We also show that this is not the case for f(n)=o(log n).
Based on these, we show that for f(n)=Ω(log n) there exists a space
hierarchy like the one for classical symmetric TMs. We also show that the
latter is not the case for f(n)=o(loglog n), since here the corresponding
class collapses to the class of semilinear predicates, and finally that for
f(n)=Ω(loglog n) the class becomes a proper superset of semilinear
predicates. We leave open the problem of characterizing the classes for
f(n)=Ω(loglog n) and f(n)=o(log n).
Abstract: Motivated by the problem of efficiently collecting data from
wireless sensor networks via a mobile sink, we present an accelerated
random walk on Random Geometric Graphs. Random
walks in wireless sensor networks can serve as fully local,
very simple strategies for sink motion that significantly
reduce energy dissipation but introduce higher latency in the
data collection process. While in most cases random walks
are studied on graphs like G(n,p) and the Grid, we define and experimentally
evaluate our newly proposed random walk on
the Random Geometric Graphs model, that more accurately
abstracts spatial proximity in a wireless sensor network. We
call this new random walk the γ-stretched random walk, and
compare it to two known random walks; its basic idea is
to favour visiting distant neighbours of the current node
towards reducing node overlap. We also define a new performance
metric called Proximity Cover Time which, along
with other metrics such as visit overlap statistics and proximity
variation, we use to evaluate the performance properties
and features of the various walks.
Abstract: Web services are becoming an important enabler of the Semantic Web. Besides the need for a rich description mechanism, Web Service information should be made available in an accessible way for machine processing. In this paper, we propose a new P2P-based approach for Web Services discovery. Peers that store Web Services information, such as data item descriptions, are efficiently located using a scalable and robust data indexing structure for Peer-to-Peer data networks, NIPPERS. We present a theoretical analysis which shows that the communication cost of the query and update operations scales double-logarithmically with the number of NIPPERS nodes. Furthermore, we show that the network is robust with respect to failures, fulfilling the quality requirements of Web Services.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. The Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → ℕ such that |Φ(u)-Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u)-Φ(v)| ≥ 1 when the minimum distance of u, v in G is two. The number of distinct frequencies used and the range of frequencies used are called the order and the span, respectively. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span or the order. In this paper we prove that the min span RCP is NP-complete for planar graphs. Next, we provide an O(nΔ) time algorithm (|V| = n) which obtains a radiocoloring of a planar graph G that approximates the minimum order within a ratio which tends to 2 (where Δ is the maximum degree of G). Finally, we provide a fully polynomial randomized approximation scheme (fpras) for the number of valid radiocolorings of a planar graph G with λ colors, in the case λ ≥ 4Δ + 50.
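The radiocoloring condition stated above is straightforward to check programmatically; the following small Python function verifies that adjacent vertices receive labels differing by at least 2 and that vertices at distance two receive distinct labels (an illustrative checker only, not part of the paper).

    def is_radiocoloring(adj, phi):
        # Check the radiocoloring condition: labels of adjacent vertices differ
        # by at least 2, and labels of vertices at distance exactly two differ
        # by at least 1 (i.e. are distinct).
        for u in adj:
            for v in adj[u]:                       # distance-1 pairs
                if abs(phi[u] - phi[v]) < 2:
                    return False
                for w in adj[v]:                   # distance-2 pairs (via v)
                    if w != u and w not in adj[u] and phi[w] == phi[u]:
                        return False
        return True

    # A path a-b-c: labels 1,3,1 violate the distance-2 condition; 1,3,5 is valid.
    path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    print(is_radiocoloring(path, {"a": 1, "b": 3, "c": 1}))  # False
    print(is_radiocoloring(path, {"a": 1, "b": 3, "c": 5}))  # True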
Abstract: We consider the offline version of the routing and
wavelength assignment (RWA) problem in transparent all-optical
networks. In such networks and in the absence of regenerators,
the signal quality of transmission degrades due to physical layer
impairments. Because of certain physical effects, routing choices
made for one lightpath affect and are affected by the choices made
for the other lightpaths. This interference among the lightpaths
is particularly difficult to formulate in an offline algorithm since,
in this version of the problem, we start without any established
connections and the utilization of lightpaths are the variables of
the problem. We initially present an algorithm for solving the pure
(without impairments) RWA problem based on an LP-relaxation
formulation that tends to yield integer solutions. Then, we extend
this algorithm and present two impairment-aware (IA) RWA algorithms
that account for the interference among lightpaths in their
formulation. The first algorithm takes the physical layer indirectly
into account by limiting the impairment-generating sources. The
second algorithm uses noise variance-related parameters to directly
account for the most important physical impairments. The
objective of the resulting cross-layer optimization problem is not
only to serve the connections using a small number of wavelengths
(network layer objective), but also to select lightpaths that have
acceptable quality of transmission (physical layer objective).
Simulation experiments using realistic network, physical layer,
and traffic parameters indicate that the proposed algorithms can
solve real problems within acceptable time.
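For orientation, a generic path-based RWA formulation of the kind such LP-relaxation approaches typically start from can be written as follows; the notation is ours, and the impairment-aware terms of the two proposed algorithms are not shown.

    \begin{align*}
    \min \;& \sum_{w \in W} u_w \\
    \text{s.t. } & \sum_{p \in P_{sd}} \sum_{w \in W} x_{p,w} = 1
        && \forall (s,d) \in D && \text{(each demand gets one lightpath)} \\
    & \sum_{p \,\ni\, \ell} x_{p,w} \le u_w
        && \forall \ell \in E,\; \forall w \in W && \text{(no wavelength clash on a link)} \\
    & x_{p,w},\, u_w \in \{0,1\} \;\rightarrow\; [0,1]
        && && \text{(LP relaxation)}
    \end{align*}

Here x_{p,w} indicates that candidate path p is assigned wavelength w, and u_w indicates that wavelength w is used anywhere in the network.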
Abstract: The concept of trust plays an important role in the operation and public acceptance of today's computing environment. Although it is a difficult concept to formalize and handle, many efforts have been made towards a clear definition of trust and the development of systematic ways for trust management. Our central viewpoint is that trust can no longer be defined as a static set of rules that define system properties holding eternally, due to the highly dynamic nature of today's computing systems (e.g. wireless networks, ad-hoc networks, virtual communities and digital territories etc.). Our approach is an effort to define trust in terms of properties that hold with some limiting probability as the system grows, and to establish conditions that ensure that "good" properties hold almost certainly. Based on this viewpoint, in this paper we provide a new framework for defining trust through formally definable properties that hold, almost certainly, in the limit in randomly growing combinatorial structures that model "boundless" computing systems (e.g. ad-hoc networks), drawing on results that establish the threshold behavior of predicates written in first- and second-order logic. We will also see that, interestingly, some trust models have properties that do not have limiting probabilities. This fact can be used to demonstrate that as certain trust networks grow indefinitely, their trust properties are not certain to be present.
Abstract: An ad-hoc mobile network is a collection of mobile hosts, with
wireless communication capabilities, forming a temporary network
without the aid of any established fixed infrastructure.
In such networks, topological connectivity is subject to frequent,
unpredictable change. Our work focuses on networks with high
rate of such changes to connectivity. For such dynamic changing
networks we propose protocols which exploit the co-ordinated
(by the protocol) motion of a small part of the network.
We show that such protocols can be designed to work
correctly and efficiently even in the case of arbitrary (but not
malicious) movements of the hosts not affected by the protocol.
We also propose a methodology for the analysis of the expected
behaviour of protocols for such networks, based on the assumption that mobile hosts (whose motion is not guided by
the protocol) conduct concurrent random walks in their
motion space.
Our work examines some fundamental problems such as pair-wise
communication, election of a leader and counting, and proposes
distributed algorithms for each of them. We provide
proofs of correctness, and give a rigorous analysis using
combinatorial tools as well as experiments.
Abstract: In this paper we demonstrate the significant impact of (a) the mobility rate and (b) the user density on the performance of routing protocols in ad-hoc mobile networks. In particular, we study the effect of these parameters on two different approaches for designing routing protocols: (a) the route creation and maintenance approach and (b) the "support" approach, that forces few hosts to move acting as "helpers" for message delivery. We study one representative protocol for each approach, i.e., AODV for the first approach and RUNNERS for the second. We have implemented the two protocols and performed a large scale and detailed simulation study of their performance. For the first time, we study AODV (and RUNNERS) in the 3D case. The main findings are: the AODV protocol behaves well in networks of high user density and low mobility rate, while its performance drops for sparse networks of highly mobile users. On the other hand, the RUNNERS protocol seems to tolerate well (and in fact benefit from) high mobility rates and low densities. Thus, we are able to partially answer an important conjecture of [Chatzigiannakis, I et al. 2003].
Abstract: We investigate the problem of how to achieve energy balanced data propagation in distributed wireless sensor networks. The energy balance property guarantees that the average per sensor energy dissipation is the same for all sensors in the network, throughout the execution of the data propagation protocol. This property is crucial for prolonging the network lifetime, by avoiding early energy depletion of sensors.
We survey representative solutions from the state of the art. We first present a basic algorithm that in each step probabilistically decides whether to propagate data one-hop towards the final destination (the sink), or to send it directly to the sink. This randomized choice trades-off the (cheap, but slow) one-hop transmissions with the direct transmissions to the sink, which are more expensive but bypass the bottleneck region around the sink and propagate data fast. By a detailed analysis using properties of stochastic processes and recurrence relations we precisely estimate (even in closed form) the probability for each propagation option necessary for energy balance.
The fact (shown by our analysis) that direct (expensive) transmissions to the sink are needed only rarely shows that our protocol, besides being energy-balanced, is also energy-efficient. We then enhance this basic result by surveying some recent findings, including a generalized algorithm, and demonstrating the optimality of this two-way probabilistic data propagation, as well as providing formal proofs of the energy optimality of the energy balance property.
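The two-choice propagation rule described above can be illustrated with a toy Python simulation in which nodes are grouped into rings by hop distance from the sink: a node at ring i forwards one hop with some probability or transmits directly to the sink otherwise. The probabilities, the quadratic direct-transmission cost and the ring abstraction are illustrative assumptions, not the analytically derived values of the surveyed protocols.

    import random

    def simulate_energy_balance(num_rings, p_hop, events=10000, seed=0):
        # A node at hop distance i forwards one hop (cheap) with probability
        # p_hop[i] or transmits directly to the sink (cost grows with distance,
        # here quadratically) with probability 1 - p_hop[i].
        # Returns total energy spent per ring.
        rng = random.Random(seed)
        energy = [0.0] * (num_rings + 1)
        for _ in range(events):
            i = rng.randint(1, num_rings)          # event sensed at ring i
            while i > 0:
                if i == 1 or rng.random() < p_hop[i]:
                    energy[i] += 1.0               # one-hop transmission cost
                    i -= 1
                else:
                    energy[i] += float(i * i)      # direct-to-sink cost ~ distance^2
                    i = 0
        return energy[1:]

    # Hypothetical forwarding probabilities per ring (not the derived ones).
    print(simulate_energy_balance(4, {1: 1.0, 2: 0.9, 3: 0.8, 4: 0.7}))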
Abstract: In future transparent optical networks, it is
important to consider the impact of physical impairments in the
routing and wavelengths assignment process, to achieve efficient
connection provisioning. In this paper, we use classical multi-
objective optimization (MOO) strategies and particularly genetic
algorithms to jointly solve the impairment aware RWA (IA-
RWA) problem. Fiber impairments are indirectly considered
through the insertion of the path length and the number of
common hops in the optimization process. It is shown that
blocking is greatly improved, while the obtained solutions truly
converge towards the Pareto front that constitutes the set of
global optimum solutions. We have evaluated our findings using
a Q estimator tool that calculates the signal quality of each path
analytically.
Index Terms: RWA, Genetic Algorithm, All-Optical Networks, Multi-Objective Optimization.
Abstract: This article studies the transmission control protocol (TCP) synchronization effect in optical burst switched networks. Synchronization of TCP flows appears when optical bursts containing segments from different flows are dropped in the network, causing the congestion windows of these flows to decrease simultaneously. In this article, this effect is studied with different assembly schemes and network scenarios. Different metrics are applied to quantitatively assess synchronization with classical assembly schemes. A new burst assembly scheme is proposed that statically or dynamically allocates flows to multiple assembly queues to control flow aggregation within the assembly cycle. The effectiveness of the scheme has been evaluated, showing a good improvement in optical link utilization.
Abstract: We propose and evaluate an impairment-aware multi-parametric routing and wavelength assignment algorithm for online traffic in transparent optical networks. In such networks the signal quality of transmission degrades due to physical layer impairments. In the multiparametric approach, a vector of cost parameters is assigned to each link, from which the cost vectors of candidate lightpaths are calculated. In the proposed scheme the cost vector includes impairment generating source parameters, such as the path length, the number of hops, the number of crosstalk sources and other inter-lightpath interfering parameters, so as to indirectly account for the physical layer effects. For a requested connection the algorithm calculates a set of candidate lightpaths, whose quality of transmission is validated using a function that combines the impairment generating parameters. For selecting the lightpath we propose and evaluate various optimization functions that correspond to different IA-RWA algorithms. Our performance results indicate that the proposed algorithms utilize efficiently the available resources and minimize the total accumulated signal degradation on the selected lightpaths, while having low execution times.
Abstract: The authors demonstrate an optical buffer architecture which is implemented using quantum dot semiconductor optical amplifiers (QD-SOAs) in order to achieve wavelength conversion with regenerative capabilities for all-optical packet switched networks. The architecture consists of cascaded programmable delay stages that minimise the number of wavelength converters required to implement the buffer. Physical layer simulations have been performed in order to reveal the potential of this scheme as well as the operating and device parameters of QD-SOA-based wavelength converters. The results obtained indicate that up to three cascaded time-slot interchanger (TSI) stages show good performance at 160 Gb/s in the 1550 nm communication window.
Abstract: Switching in core optical networks is currently being
performed using high-speed electronic or all-optical
circuit switches. Switching with high-speed electronics
requires optical-to-electronic (O/E) conversion of the
data stream, making the switch a potential bottleneck
of the network: any effort (including parallelization) for
electronics to approach the optical speeds seems to be
already reaching its practical limits. Furthermore, the
store-and-forward approach of packet-switching does
not seem suitable for all-optical implementation due to
the lack of practical optical random-access-memories
to buffer and resolve contentions. Circuit switching on
the other hand, involves a pre-transmission delay for
call setup and requires the aggregation of microflows
into circuits, sacrificing the granularity and the control
over individual flows, and is inefficient for bursty traffic.
Optical burst switching (OBS) has been proposed
by Qiao and Yoo (1999) to combine the advantages of
both packet and circuit switching and is considered a
promising technology for the next generation optical
internet.
Abstract: The objective of this research is to propose two new optical procedures for packet routing and forwarding in the framework of transparent optical networks. The single-wavelength label-recognition and packet-forwarding unit, which represents the central physical constituent of the switching node, is fully described in both cases. The first architecture is a hybrid opto-electronic structure relying on an optical serial-to-parallel converter designed to slow down the label processing. The remaining switching operations are done electronically. The routing system remains transparent for the packet payloads. The second architecture is an all-optical architecture and is based on the implementation of all-optical decoding of the parallelized label. The packet-forwarding operations are done optically. The major subsystems required in both of the proposed architectures are described on the basis of nonlinear effects in semiconductor optical amplifiers. The experimental results are compatible with the integration of the whole architecture. Those subsystems are a 4-bit time-to-wavelength converter, a pulse extraction circuit, an optical wavelength generator, a 3 x 8 all-optical decoder and a packet envelope detector.
Abstract: In this paper we propose an energy-efficient broadcast algorithm for wireless networks for the case where the transmission powers of the nodes are fixed. Our algorithm is based on the multicost approach and selects an optimal energy-efficient set of nodes for broadcasting, taking into account: i) the node residual energies, ii) the transmission powers used by the nodes, and iii) the set of nodes that are covered by a specific schedule. Our algorithm is optimal, in the sense that it can optimize any desired function of the total power consumed by the broadcasting task and the minimum of the current residual energies of the nodes, provided that the optimization function is monotonic in each of these parameters. Our algorithm has non-polynomial complexity, thus, we propose a relaxation producing a near-optimal solution in polynomial time. Using simulations we show that the proposed algorithms outperform other established solutions for energy-aware broadcasting with respect to both energy consumption and network lifetime. Moreover, it is shown that the near-optimal multicost algorithm obtains most of the performance benefits of the optimal multicost algorithm at a smaller computational overhead.
Abstract: This paper studies the data gathering problem in wireless networks, where data generated at the nodes has to be collected at a single sink. We investigate the relationship between routing optimality and fair resource management. In particular, we prove that for energy balanced data propagation, Pareto optimal routing and flow maximization are equivalent, and also prove that flow maximization is equivalent to maximizing the network lifetime. We algebraically characterize the network structures in which energy balanced data flows are maximal. Moreover, we algebraically characterize communication links which are not used by an optimal flow. This leads to the characterization of minimal network structures supporting the maximal flows.
We note that energy balance, although implying global optimality, is a local property that can be computed efficiently and in a distributed manner. We suggest online distributed algorithms for energy balance in different optimal network structures and numerically show their stability in a particular setting. We remark that although the results obtained in this paper have a direct consequence in energy saving for wireless networks, they are not limited to this type of network, nor to energy as a resource. As a matter of fact, the results are much more general and can be used for any type of network and different types of resources.
Abstract: This paper studies the data gathering problem in wireless networks, where data generated at the nodes has to be collected at a single sink. We investigate the relationship between routing optimality and fair resource management. In particular, we prove that for energy-balanced data propagation, Pareto optimal routing and flow maximization are equivalent, and also prove that flow maximization is equivalent to maximizing the network lifetime. We algebraically characterize the network structures in which energy-balanced data flows are maximal. Moreover, we algebraically characterize communication links which are not used by an optimal flow. This leads to the characterization of minimal network structures supporting the maximal flows.
We note that energy-balance, although implying global optimality, is a local property that can be computed efficiently and in a distributed manner. We suggest online distributed algorithms for energy-balance in different optimal network structures and numerically show their stability in a particular setting. We remark that although the results obtained in this paper have a direct consequence in energy saving for wireless networks, they are not limited to this type of network, nor to energy as a resource. As a matter of fact, the results are much more general and can be used for any type of network and different types of resources.
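One simple way to write down the flow-maximization problem referred to above (with per-node energy budgets) is the following linear program; the notation and the simplifications (e.g. ignoring reception costs) are ours, not the paper's exact formulation.

    \begin{align*}
    \max \;& \sum_{v \ne \mathrm{sink}} s_v \\
    \text{s.t. } & \sum_{w} f_{vw} - \sum_{u} f_{uv} = s_v
        && \forall v \ne \mathrm{sink} && \text{(flow conservation)} \\
    & \sum_{w} c_{vw} f_{vw} \le E_v
        && \forall v \ne \mathrm{sink} && \text{(energy budget of node } v\text{)} \\
    & 0 \le s_v \le g_v, \qquad f_{vw} \ge 0
    \end{align*}

Here f_{vw} is the data flow on link (v,w), s_v the data actually injected by node v out of the g_v it generates, c_{vw} the per-unit transmission cost of link (v,w), and E_v the energy budget of node v.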
Abstract: The proliferation of peer-to-peer
(P2P) systems has come with various
compelling applications including file sharing based on distributed
hash tables (DHTs) or other kinds of overlay networks.
Searching the content of files (especially Web Search) requires
multi-keyword
querying with scoring and ranking. Existing approaches
have no way of taking into account the correlation between
the keywords in the query. This paper presents our solution
that incorporates the queries and behavior of the users in the P2P
network such that interesting correlations can be inferred.
Abstract: In this paper, we demonstrate optical transparency
in packet formatting and network traffic offered by all-optical
switching devices. Exploiting the bitwise processing capabilities
of these “optical transistors,” simple optical circuits are designed
verifying their independence from packet length, synchronization
and packet-to-packet power fluctuations. Devices with these attributes
are key elements for achieving network flexibility, fine
granularity and efficient bandwidth-on-demand use. To this end, a
header/payload separation circuit operating with IP-like packets,
a clock and data recovery circuit handling asynchronous packets
and a burst-mode receiver for bursty traffic are presented. These
network subsystems can find application in future high capacity
data-centric photonic packet switched networks.
Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks. We
call it the PALOMA model, standing for PAssively mobile LOgarithmic space MAchines. The main
modification w.r.t. the Population Protocol model [2] is that agents now, instead of being automata, are
Turing Machines whose memory is logarithmic in the population size n. Note that the new model is still
easily implementable with current technology. We focus on complete communication graphs. We define
the complexity class PLM, consisting of all symmetric predicates on input assignments that are stably
computable by the PALOMA model. We assume that the agents are initially identical. Surprisingly, it
turns out that the PALOMA model can assign unique consecutive ids to the agents and inform them
of the population size! This allows us to give a direct simulation of a Deterministic Turing Machine
of O(n log n) space, thus, establishing that any symmetric predicate in SPACE(n log n) also belongs
to PLM. We next prove that the PALOMA model can simulate the Community Protocol model [15],
thus, improving the previous lower bound to all symmetric predicates in NSPACE(n log n). Going
one step further, we generalize the simulation of the deterministic TM to prove that the PALOMA
model can simulate a Nondeterministic TM of O(n log n) space. Although providing the same lower
bound, the important remark here is that the bound is now obtained in a direct manner, in the sense
that it does not depend on the simulation of a TM by a Pointer Machine. Finally, by showing that a
Nondeterministic TM of O(n log n) space decides any language stably computable by the PALOMA
model, we end up with an exact characterization for PLM: it is precisely the class of all symmetric
predicates in NSPACE(n log n).
Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks, called PM, standing for Passively mobile Machines. The main modification w.r.t. the Population Protocol model [Angluin et al. 2006] is that agents now, instead of being automata, are Turing Machines. We provide general definitions for unbounded memories, but we are mainly interested in computations upper-bounded by plausible space limitations. However, we prove that our results hold for more general cases. We focus on \emph{complete interaction graphs} and define the complexity classes PMSPACE(f(n)) parametrically, consisting of all predicates that are stably computable by some PM protocol that uses O(f(n)) memory in each agent. We provide a protocol that generates unique identifiers from scratch only by using O(log n) memory, and use it to provide an exact characterization of the classes PMSPACE(f(n)) when f(n) = Ω(log n): they are precisely the classes of all symmetric predicates in NSPACE(nf(n)). As a consequence, we obtain a space hierarchy of the PM model when the memory bounds are Ω(log n). We next explore the computability of the PM model when the protocols use o(loglog n) space per machine and prove that SEM = PMSPACE(f(n)) when f(n) = o(loglog n), where SEM denotes the class of the semilinear predicates. Finally, we establish that the minimal space requirement for the computation of non-semilinear predicates is O(log log n).
Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks, called PM, standing for Passively mobile Machines. The main modification w.r.t. the Population Protocol model [Angluin et al. 2006] is that the agents now, instead of being automata, are Turing Machines. We provide general definitions for unbounded memories, but we are mainly interested in computations upper-bounded by plausible space limitations. However, we prove that our results hold for more general cases. We focus on complete interaction graphs and define the complexity classes PMSPACE(f(n)) parametrically, consisting of all predicates that are stably computable by some PM protocol that uses O(f(n)) memory in each agent. We provide a protocol that generates unique identifiers from scratch only by using O(log n) memory, and use it to provide an exact characterization of the classes PMSPACE(f(n)) when f(n)=Omega(log n): they are precisely the classes of all symmetric predicates in NSPACE(nf(n)). As a consequence, we obtain a space hierarchy of the PM model when the memory bounds are Omega(log n). Finally, we establish that the minimal space requirement for the computation of non-semilinear predicates is O(log log n).
Abstract: In this work we propose and develop a comprehensive
infrastructure, coined PastryStrings, for supporting rich
queries on both numerical (with range and comparison
predicates) and string attributes (accommodating equality,
prefix, suffix, and containment predicates) over DHT networks
utilising prefix-based routing. As event-based, publish/
subscribe information systems are a champion application
class, we formulate our solution in terms of this environment.
Abstract: In this work we propose and develop a comprehensive infrastructure, coined PastryStrings, for supporting rich queries on
both numerical (with range and comparison predicates) and string attributes (accommodating equality, prefix, suffix, and
containment predicates) over DHT networks utilising prefix-based routing. As event-based, publish/subscribe information
systems are a champion application class, we formulate our solution in terms of this environment.
Abstract: We consider path protection in the routing and
wavelength assignment (RWA) problem for impairment
constrained WDM optical networks. The proposed multicost
RWA algorithms select the primary and the backup lightpaths by
accounting for physical layer impairments. The backup lightpath
may either be activated (1+1 protection) or it may be reserved and
not activated, with activation taking place when/if needed (1:1
protection). In case of 1:1 protection the period of time where the
quality of its transmission (QoT) is valid, despite the possible
establishment of future connections, should be preserved, so as to
be used in case the primary lightpath fails. We show that, by using
the multicost approach for solving the RWA with protection
problem, great benefits can be achieved both in terms of the
connection blocking rate and in terms of the validity period of the
backup lightpath. Moreover, the multicost approach, by providing
a set of candidate lightpaths for each source-destination pair,
instead of a single one, offers ease and flexibility in selecting the
primary and the backup lightpaths.
Abstract: In this work, we study the impact of dynamically changing
link capacities on the delay bounds of LIS (Longest-In-
System) and SIS (Shortest-In-System) protocols on specific
networks (that can be modelled as Directed Acyclic Graphs-
DAGs) and stability bounds of greedy contention-resolution
protocols running on arbitrary networks under the Adversarial
Queueing Theory. Specifically, we consider the model
of dynamic capacities, where each link capacity may take
on integer values from [1, C] with C > 1, under a (w, ρ)-
adversary.
Abstract: In this work, we study the impact of dynamically changing link capacities on the delay bounds of LIS (Longest-In-System) and SIS (Shortest-In-System) protocols on specific networks (that can be modelled as Directed Acyclic Graphs (DAGs)) and stability bounds of greedy contention–resolution protocols running on arbitrary networks under the Adversarial Queueing Theory. Specifically, we consider the model of dynamic capacities, where each link capacity may take on integer values from [1,C] with C>1, under a (w,ρ)-adversary. We show that the packet delay on DAGs for LIS is upper bounded by O(iwρC) and lower bounded by Ω(iwρC), where i is the level of a node in a DAG (the length of the longest path leading to node v when nodes are ordered by the topological order induced by the graph). In a similar way, we show that the performance of SIS on DAGs is lower bounded by Ω(iwρC), while the existence of a polynomial upper bound for packet delay on DAGs when SIS is used for contention–resolution remains an open problem. We prove that every queueing network running a greedy contention–resolution protocol is stable for a rate not exceeding a particular stability threshold, depending on C and the length of the longest path in the network.
Abstract: We demonstrate the use of impairment constraint routing
for performance engineering of transparent metropolitan area
optical networks. Our results show the relationship between
blocking probability and different network characteristics such
as span length, amplifier noise figure, and hit rate, and provide
information on the system specifications required to achieve
acceptable network performance.
Abstract: We present a detailed performance evaluation of a
hybrid optical switching architecture called Overspill Routing in
Optical Networks (ORION). The ORION architecture combines
wavelength and (electronic) packet switching, so as to obtain the
advantages of both switching paradigms. We have developed an
extensive network simulator where the basic features of the
ORION architecture were modeled, including suitable load-varying
sources and edge/core node architectures. Various aspects
of the ORION architecture were studied including the routing
policies used (i.e. once-ORION-always-ORION and lightpath re-entry)
and the various options available for the buffer
architecture. The complete network study shows that ORION can
absorb temporary traffic overloads, as intended, provided
sufficient buffering is present.
Abstract: In this work we extend the population protocol model of Angluin et al., in
order to model more powerful networks of very small resource-limited
artefacts (agents) that may follow some unpredictable passive
movement. These agents communicate in pairs according to the commands of
an adversary scheduler. A directed (or undirected) communication graph
encodes the following information: each edge (u,v) denotes that during the
computation it is possible for an interaction between u and v to happen in
which u is the initiator and v the responder. The new characteristic of
the proposed mediated population protocol model is the existence of a
passive communication provider that we call mediator. The mediator is a
simple database with communication capabilities. Its main purpose is to
maintain the permissible interactions in communication classes, whose
number is constant and independent of the population size. For this reason
we assume that each agent has a unique identifier of whose existence the
agent itself is unaware and which it thus cannot store in its working
memory. When two agents are about to interact they send their ids to the
mediator. The mediator searches for that ordered pair in its database and
if it exists in some communication class it sends back to the agents the
state corresponding to that class. If this interaction is not permitted to
the agents, or, in other words, if this specific pair does not exist in
the database, the agents are informed to abord the interaction. Note that
in this manner for the first time we obtain some control on the safety of
the network and moreover the mediator provides us at any time with the
network topology. Equivalently, we can model the mediator by communication
links that are capable of keeping states from an edge state set of constant
cardinality. This alternative way of thinking of the new model has many
advantages concerning the formal modeling and the design of protocols,
since it enables us to abstract away the implementation details of the
mediator. Moreover, we extend further the new model by allowing the edges
to keep readable only costs, whose values also belong to a constant size
set. We then allow the protocol rules for pairwise interactions to modify
the corresponding edge state by also taking into account the costs. Thus,
our protocol descriptions are still independent of the population size and
do not use agent ids, i.e. they preserve scalability, uniformity and
anonymity. The proposed Mediated Population Protocols (MPP) can stably
compute graph properties of the communication graph. We show this for the
properties of maximal matchings (in undirected communication graphs), for
finding the transitive closure of directed graphs, and for finding all
edges of small cost. We demonstrate that our mediated protocols are
stronger than the classical population protocols. First of all we notice
an obvious fact: the classical model is a special case of the new model,
that is, the new model can compute at least whatever the classical one can.
We then present a mediated protocol that stably computes
the product of two nonnegative integers in the case where G is complete,
directed and connected. Such predicates are not semilinear, and it has been
proven that classical population protocols on complete graphs compute
precisely the semilinear predicates; in this manner we show that there is at
least one predicate that our model computes and the classical model cannot.
To show this fact, we state and prove a
general Theorem about the composition of two mediated population
protocols, where the first one has stabilizing inputs. We also show that
all predicates stably computable in our model are (non-uniformly) in the
class NSPACE(m), where m is the number of edges of the communication
graph. Finally, we define Randomized MPP and show that any Peano predicate
accepted by a Randomized MPP can be verified in deterministic polynomial time.
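As a minimal illustration of the mediated interaction described above (a sketch in Python with hypothetical names, not the paper's formal model or code), the following shows the mediator storing the permissible ordered pairs with their edge states, and a single interaction in which the agents' states and the edge state are updated by a transition function delta, or aborted if the pair is not in the database.

```python
# Minimal sketch of one mediated interaction (hypothetical names, not the paper's code).
# The mediator stores the permissible ordered pairs and the state of each such edge;
# the agents themselves never learn or store their own identifiers.

class Mediator:
    def __init__(self, permissible_edges):
        # permissible_edges: dict mapping (initiator_id, responder_id) -> edge state
        self.edges = dict(permissible_edges)

    def lookup(self, u_id, v_id):
        """Return the edge state of (u_id, v_id), or None if the pair is not permitted."""
        return self.edges.get((u_id, v_id))

    def update(self, u_id, v_id, new_edge_state):
        self.edges[(u_id, v_id)] = new_edge_state


def interact(u_id, v_id, agent_state, mediator, delta):
    """One pairwise interaction with u as initiator and v as responder.

    delta: transition function (q_u, q_v, s_edge) -> (q_u', q_v', s_edge').
    Returns False if the mediator tells the agents to abort the interaction.
    """
    s_edge = mediator.lookup(u_id, v_id)
    if s_edge is None:              # the pair is not in the database: abort
        return False
    q_u, q_v = agent_state[u_id], agent_state[v_id]
    q_u_new, q_v_new, s_new = delta(q_u, q_v, s_edge)
    agent_state[u_id], agent_state[v_id] = q_u_new, q_v_new
    mediator.update(u_id, v_id, s_new)
    return True
```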
Abstract: This is a joint work with Ioannis Chatzigiannakis and Othon Michail.
We discuss here the population protocol model and most of its well-known extensions. The population protocol model aims to represent sensor networks consisting of tiny computational devices with sensing capabilities that follow some unpredictable and uncontrollable mobility pattern. It adopts a minimalistic approach and, thus, naturally computes a quite restricted class of predicates and exhibits almost no fault-tolerance. Most recent approaches make extra realistic and implementable assumptions, in order to gain more computational power and/or speed-up the time to convergence and/or improve fault-tolerance. In particular, the mediated population protocol model, the community protocol model, and the PALOMA model, which are all extensions of the population protocol model, are thoroughly discussed. Finally, the inherent difficulty of verifying the correctness of population protocols that run on complete communication graphs is revealed, but a promising algorithmic solution is presented.
Abstract: This Volume contains the 11 papers corresponding to poster and demo presentations
accepted to the 7th ACM/IEEE International Symposium on Modeling,
Analysis and Simulation of Wireless and Mobile Systems (MSWiM 04),
held October 4-6, 2004, in Venice, Italy.
MSWiM 2004 (http://www.cs.unibo.it/mswim2004/) is intended to provide
an international forum for original ideas, recent results and achievements on
issues and challenges related to mobile and wireless systems.
A Call for Posters was announced and widely disseminated, soliciting posters
that report on recent original results or on-going research in the area of wireless
and mobile networks. Prospective authors were encouraged to submit interesting
results on all aspects of modeling, analysis and simulation of mobile and
wireless networks and systems. The scope and topics of the Posters Session
were the same as those included in the MSWiM Call for Papers (see above).
Poster presentations were meant to provide authors with early feedback on
their research work and enable them to present their research and exchange
ideas during the Symposium.
All submissions to the call for posters as well as selected papers submitted
to MSWiM 04 were considered and reviewed. The review process resulted in
accepting the set of 11 papers included in this Volume. Accepted posters will
also be on display during the Symposium.
The set of papers in this Proceedings covers a wide range of important topics
in wireless and mobile computing, including channel allocation in wireless
networks, quality of service provisioning in IEEE 802.11 wireless LANs, IP
mobility support, energy conservation, routing in mobile ad hoc networks, resource
sharing, wireless access to the WWW, sensor networks etc. The performance
evaluation techniques used include both analysis and simulation.
We hope that the poster papers included in this Volume will facilitate a fruitful
and lively discussion and exchange of interesting and creative ideas during
the Symposium.
We wish to thank the MSWiM Steering Committee Chair Azzedine Boukerche
and the Program Co-Chairs of MSWiM 04 Carla-Fabiana Chiasserini and
Lorenzo Donatiello for their valuable help in the selection procedure. We also
thank the MSWiM 04 Publicity Co-Chairs Luciano Bononi, Helen Karatza and
Mirela Sechi Moretti Annoni Notare for disseminating the Call for Posters.
We warmly thank the Poster Proceedings Chair Ioannis Chatzigiannakis for his
careful and excellent work in preparing the Volume you now hold in your hands.
Abstract: The 8th ACM/IEEE International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems Symposium (MSWiM 2005) solicits posters that report on recent original results or on-going research in the area of wireless and mobile networks.
Abstract: The 9th ACM/IEEE International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems Symposium (MSWiM 2006) solicits posters that report on recent original results or on-going research in the area of wireless and mobile networks.
Abstract: We propose, implement and evaluate new energy conservation schemes for efficient data propagation in wireless sensor networks. Our protocols are adaptive, i.e. locally monitor the network conditions and accordingly adjust towards optimal operation choices. This dynamic feature is particularly beneficial in heterogeneous settings and in cases of redeployment of sensor devices in the network area. We implement our protocols and evaluate their performance through a detailed simulation study using our extended version of ns-2. In particular we combine our schemes with known communication paradigms. The simulation findings demonstrate significant gains and good trade-offs in terms of delivery success, delay and energy dissipation.
Abstract: In this paper we study the problem of assigning transmission ranges to the nodes of a multihop
packet radio network so as to minimize the total power consumed under the constraint
that adequate power is provided to the nodes to ensure that the network is strongly connected
(i.e., each node can communicate along some path in the network to every other node). Such
assignment of transmission ranges is called complete. We also consider the problem of achieving
strongly connected bounded diameter networks.
For the case of n + 1 collinear points at unit distance apart (the unit chain) we give a tight
asymptotic bound for the minimum cost of a range assignment of diameter h when h is a fixed
constant and when h > (1 + ε) log n, for some constant ε > 0. When the distances between the
collinear points are arbitrary, we give an O(n^4) time dynamic programming algorithm for finding
a minimum cost complete range assignment.
For points in three dimensions we show that the problem of deciding whether a complete
range assignment of a given cost exists is NP-hard. For the same problem we give an O(n^2)
time approximation algorithm which provides a complete range assignment with cost within a
factor of two of the minimum. The complexity of this problem in two dimensions remains open,
while the approximation algorithm works in this case as well.
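The abstract does not spell out the factor-two algorithm; a classical way to obtain such a bound is to compute a minimum spanning tree of the points and assign each node a range equal to its longest incident MST edge. The sketch below (Python, assuming a power cost of range^alpha for an assumed path-loss exponent alpha) illustrates that standard construction, not necessarily the paper's exact procedure.

```python
import math

def mst_range_assignment(points, alpha=2.0):
    """Assign each node a range equal to its longest incident edge in a minimum
    spanning tree of the points (a classical factor-two heuristic for complete
    range assignments under the assumed cost model sum_i range_i ** alpha).

    points: list of coordinate tuples (any fixed dimension); alpha: assumed
    path-loss exponent. Returns (ranges, total_power).
    """
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])

    # Prim's algorithm in O(n^2): grow the tree from node 0.
    in_tree = [False] * n
    best = [math.inf] * n        # cheapest known edge connecting each node to the tree
    parent = [-1] * n
    best[0] = 0.0
    ranges = [0.0] * n
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:       # record the new MST edge on both of its endpoints
            d = dist(u, parent[u])
            ranges[u] = max(ranges[u], d)
            ranges[parent[u]] = max(ranges[parent[u]], d)
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u

    total_power = sum(r ** alpha for r in ranges)
    return ranges, total_power
```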
Abstract: Wireless networks have received significant attention during the recent years. Especially, ad hoc wireless networks emerged due to their potential applications in battlefield, emergency disaster relief, etc. [13]. Unlike traditional wired networks or cellular wireless networks, no wired backbone infrastructure is installed for ad hoc wireless networks.
Abstract: Wireless sensor networks are composed of a vast number of ultra-small, fully autonomous computing, communication, and sensing devices, with very restricted energy and computing capabilities, that cooperate to accomplish a large sensing task. Such networks can be very useful in practice. The authors propose extended versions of two data propagation protocols: the Sleep-Awake Probabilistic Forwarding (SW-PFR) protocol and the Hierarchical Threshold-Sensitive Energy-Efficient Network (H-TEEN) protocol. These nontrivial extensions aim at improving the performance of the original protocols by introducing sleep-awake periods in the PFR case to save energy and introducing a hierarchy of clustering in the TEEN case to better cope with large network areas. The authors implemented the two protocols and performed an extensive comparison via simulation of various important measures of their performance with a focus on energy consumption. Data propagation under this approach exhibits high fault tolerance and increases network lifetime.
Abstract: Wireless sensor networks are composed of a vast number of ultra-small, fully autonomous computing, communication, and sensing devices, with very restricted energy and computing capabilities, that cooperate to accomplish a large sensing task. Such networks can be very useful in practice. The authors propose extended versions of two data propagation protocols: the Sleep-Awake Probabilistic Forwarding (SW-PFR) protocol and the Hierarchical Threshold-Sensitive Energy-Efficient Network (H-TEEN) protocol. These nontrivial extensions aim at improving the performance of the original protocols by introducing sleep-awake periods in the PFR case to save energy and introducing a hierarchy of clustering in the TEEN case to better cope with large network areas. The authors implemented the two protocols and performed an extensive comparison via simulation of various important measures of their performance with a focus on energy consumption. Data propagation under this approach exhibits high fault tolerance and increases network lifetime.
Abstract: Wireless sensor and actor networks are comprised of a large number of small, fully autonomous computing, communication, sensing and actuation devices, with very restricted energy and computing capabilities. Such devices co-operate to accomplish a large sensing and acting task. Sensors gather information for an event in the physical world and notify the actors that perform appropriate actions by making a decision on receipt of the sensed information. Such networks can be very useful in practice, i.e. in the local detection of remote crucial events and the propagation of relevant data to decision
centers that perform appropriate actions upon the environment, thus realizing sensing and acting from a distance.
In this work we present a communication protocol that enables scalable, energy efficient and fault tolerant coordination while allowing to prioritize sensing tasks in situated wireless sensor and actor networks. The sensors react locally on environment and context changes and interact with each other in order to adjust the performance of the network in terms of energy, latency and success rate on a per-task basis. To deal with the increased complexity of such large-scale systems, our protocol pulls some additional knowledge about the network in order to subsequently improve data forwarding towards the actors.
We implement and evaluate the protocol using large scale simulation, showing its suitability in networks where sensor to actor and actor to actor coordination are important for accomplishing tasks of different priorities.
Abstract: In this work we focus on the energy efficiency challenge in wireless sensor networks, from both an on-line perspective (related to routing), as well as a network design perspective (related to tracking). We investigate a few representative, important aspects of energy efficiency: a) the robust and fast data propagation b) the problem of balancing the energy
dissipation among all sensors in the network and c) the problem of efficiently tracking moving
entities in sensor networks. Our work here is a methodological survey of selected results that
have already appeared in the related literature.
In particular, we investigate important issues of energy optimization, like minimizing the total
energy dissipation, minimizing the number of transmissions as well as balancing the energy
load to prolong the system's lifetime. We review characteristic protocols and techniques in the recent literature, including probabilistic forwarding and local optimization methods. We study the problem of localizing and tracking multiple moving targets from a network design perspective, i.e. towards estimating the least possible number of sensors, their positions and operation characteristics needed to efficiently perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem by a covering combinatorial problem and then design and analyze an efficient approximate method for sensor placement
and operation.
Abstract: In this work we present three new distributed, probabilistic data propagation protocols for Wireless Sensor Networks which aim at maximizing the network's operational life and improving its performance. The keystone of these protocols' design is fairness, which dictates that a fair portion of the network's workload should be assigned to each node, depending on its role in the system. All three protocols, EFPFR, MPFR and TWIST, emerged from the study of the rigorously analyzed protocol PFR: its design elements were identified, and improvements were suggested and incorporated into the introduced protocols. The experiments conducted show that our proposals improve PFR's performance in terms of success rate, total amount of energy saved, number of alive sensors and standard deviation of the energy left. Indicatively, we note that while PFR's success rate is 69.5%, TWIST achieves 97.5%, and its standard deviation of energy is almost half that of PFR.
Abstract: Grids offer a transparent interface to geographically scattered computation, communication, storage and
other resources. In this chapter we propose and evaluate QoS-aware and fair scheduling algorithms for
Grid Networks, which are capable of optimally or near-optimally assigning tasks to resources, while taking
into consideration the task characteristics and QoS requirements. We categorize Grid tasks according to
whether or not they demand hard performance guarantees. Tasks with one or more hard requirements are
referred to as Guaranteed Service (GS) tasks, while tasks with no hard requirements are referred to as Best
Effort (BE) tasks. For GS tasks, we propose scheduling algorithms that provide deadline or computational
power guarantees, or offer fair degradation in the QoS such tasks receive in case of congestion. Regarding
BE tasks our objective is to allocate resources in a fair way, where fairness is interpreted in the max-min fair
share sense. Though we mainly address scheduling problems on computation resources, we also look at
the joint scheduling of communication and computation resources and propose routing and scheduling
algorithms aiming at co-allocating both resource types so as to satisfy their respective QoS requirements.
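Fairness for Best Effort tasks is interpreted in the max-min fair share sense; the following sketch (Python, hypothetical names, a single resource of given capacity) shows the standard water-filling computation of max-min fair allocations, as an illustration of the notion rather than of the chapter's scheduler.

```python
def max_min_fair_share(demands, capacity):
    """Classic water-filling computation of max-min fair allocations.

    demands: list of requested amounts (e.g. CPU shares) of the BE tasks.
    capacity: total amount available on the resource.
    No task gets more than it asked for, and any unused share is repeatedly
    redistributed among the still-unsatisfied tasks.
    """
    n = len(demands)
    alloc = [0.0] * n
    remaining = capacity
    unsatisfied = list(range(n))
    while unsatisfied and remaining > 1e-12:
        fair_share = remaining / len(unsatisfied)
        still = []
        for i in unsatisfied:
            give = min(fair_share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if alloc[i] < demands[i] - 1e-12:
                still.append(i)
        unsatisfied = still
    return alloc

# Example: three BE tasks requesting 2, 4 and 10 units on a resource of capacity 9
# receive 2, 3.5 and 3.5 units respectively.
```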
Abstract: This research further investigates the recently introduced
(in [4]) paradigm of radiation awareness in ambient environments with abundant heterogeneous wireless networking
from a distributed computing perspective. We call radiation
at a point of a wireless network the total amount of electromagnetic quantity the point is exposed to; our definition incorporates the effect of topology as well as the time domain
and environment aspects. Even if the impact of radiation on human health remains largely unexplored and controversial,
we believe it is worth trying to understand and control, in
a way that does not decrease much the quality of service
offered to users of the wireless network.
In particular, we here focus on the fundamental problem
of efficient data propagation in wireless sensor networks, trying
to keep latency low while maintaining at low levels the
radiation cumulated by wireless transmissions. We first propose greedy and oblivious routing heuristics that are radiation aware. We then combine them with temporal back-off
schemes that use local properties of the network (e.g. number of neighbours, distance from sink) in order to "spread" radiation in a spatio-temporal way. Our proposed radiation-aware
routing heuristics succeed in keeping radiation levels low,
while not increasing latency.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters, by exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function φ: V → N such that |φ(u)-φ(v)| ≥ 2 when u,v are neighbors in G, and |φ(u)-φ(v)| ≥ 1 when the distance of u,v in G is two. The number of discrete frequencies and the range of frequencies used are called order and span, respectively. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span or the order. In this paper we prove that the radiocoloring problem for general graphs is hard to approximate (unless NP=ZPP) within a factor of n^{1/2-ε} (for any ε > 0), where n is the number of vertices of the graph. However, when restricted to some special cases of graphs, the problem becomes easier. We prove that the min span RCP is NP-complete for planar graphs. Next, we provide an O(nΔ) time algorithm (|V|=n) which obtains a radiocoloring of a planar graph G that approximates the minimum order within a ratio which tends to 2 (where Δ is the maximum degree of G). Finally, we provide a fully polynomial randomized approximation scheme (fpras) for the number of valid radiocolorings of a planar graph G with λ colors, in the case where λ ≥ 4Δ+50.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters, by exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function φ: V → N such that |φ(u)-φ(v)| ≥ 2 when u,v are neighbors in G, and |φ(u)-φ(v)| ≥ 1 when the distance of u,v in G is two. The number of discrete frequencies and the range of frequencies used are called order and span, respectively. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span or the order. In this paper we prove that the radiocoloring problem for general graphs is hard to approximate (unless NP=ZPP) within a factor of n^{1/2-ε} (for any ε > 0), where n is the number of vertices of the graph. However, when restricted to some special cases of graphs, the problem becomes easier. We prove that the min span RCP is NP-complete for planar graphs. Next, we provide an O(nΔ) time algorithm (|V|=n) which obtains a radiocoloring of a planar graph G that approximates the minimum order within a ratio which tends to 2 (where Δ is the maximum degree of G). Finally, we provide a fully polynomial randomized approximation scheme (fpras) for the number of valid radiocolorings of a planar graph G with λ colors, in the case where λ ≥ 4Δ+50.
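To make the radiocoloring constraint concrete, the following small check (a Python sketch with hypothetical data structures, not taken from the papers) verifies that neighbouring vertices receive frequencies at least 2 apart and vertices at distance two receive frequencies at least 1 apart.

```python
from itertools import combinations

def is_radiocoloring(adj, phi):
    """Check the radiocoloring constraint for a graph.

    adj: dict mapping each vertex to the set of its neighbours.
    phi: dict mapping each vertex to a non-negative integer frequency.
    Frequencies of neighbours must differ by at least 2, and frequencies of
    vertices at distance two must differ by at least 1.
    """
    for u, v in combinations(adj, 2):
        if v in adj[u]:                      # u and v are adjacent
            if abs(phi[u] - phi[v]) < 2:
                return False
        elif adj[u] & adj[v]:                # common neighbour, so distance two
            if abs(phi[u] - phi[v]) < 1:
                return False
    return True

# The order of an assignment is the number of distinct frequencies,
# len(set(phi.values())), and its span is max(phi.values()) - min(phi.values()).
```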
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function φ: V → N such that |φ(u)−φ(v)| ≥ 2 when u,v are neighbors in G, and |φ(u)−φ(v)| ≥ 1 when the distance of u,v in G is two. The discrete number of frequencies used is called order and the range of frequencies used, span. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span (min span RCP) or the order (min order RCP).
In this paper, we deal with an interesting, yet not examined until now, variation of the radiocoloring problem: that of satisfying frequency assignment requests which exhibit some periodic behavior. In this case, the interference graph (modelling interference between transmitters) is some (infinite) periodic graph. Infinite periodic graphs usually model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they can model very large networks produced by the repetition of a small graph.
A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph Gi(Vi,Ei). The edge set of G is derived by connecting the vertices of each iteration Gi to some of the vertices of the next iteration Gi+1, the same for all Gi. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest.
We give two basic results:
• We prove that the min span RCP is PSPACE-complete for periodic planar graphs.
• We provide an O(n(Δ(Gi)+σ)) time algorithm (where |Vi|=n, Δ(Gi) is the maximum degree of the graph Gi and σ is the number of edges connecting each Gi to Gi+1), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(Gi)+σ tends to infinity.
We remark that any approximation algorithm for the min span RCP of a finite planar graph G that achieves a span of at most αΔ(G)+constant, for any α, where Δ(G) is the maximum degree of G, can be used as a subroutine in our algorithm to produce an approximation for min span RCP of asymptotic ratio α for periodic planar graphs.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. The Radiocoloring (RC) of a graph G(V,E) is an assignment function φ: V → N such that |φ(u) - φ(v)| ≥ 2, when u, v are neighbors in G, and |φ(u) - φ(v)| ≥ 1 when the distance of u, v in G is two. The range of frequencies used is called span. Here, we consider the optimization version of the Radiocoloring Problem (RCP) of finding a radiocoloring assignment of minimum span, called min span RCP. In this paper, we deal with a variation of RCP: that of satisfying frequency assignment requests with some periodic behavior. In this case, the interference graph is an (infinite) periodic graph. Infinite periodic graphs model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they may model very large networks produced by the repetition of a small graph. A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph Gi(Vi,Ei). The edge set of G is derived by connecting the vertices of each iteration Gi to some of the vertices of the next iteration Gi+1, the same for all Gi. The model of periodic graphs considered here is similar to that of periodic graphs in Orlin [13], Marathe et al [10]. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest. We give two basic results: - We prove that the min span RCP is PSPACE-complete for periodic planar graphs. - We provide an O(n(Δ(Gi) + σ)) time algorithm (where |Vi| = n, Δ(Gi) is the maximum degree of the graph Gi and σ is the number of edges connecting each Gi to Gi+1), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(Gi) + σ tends to infinity.
Abstract: In this work we address the issue of efficient processing of range queries in DHT-based P2P data networks. The novelty of the proposed approach lies on architectures, algorithms, and mechanisms for identifying and appropriately exploiting powerful nodes in such networks. The existence of such nodes has been well documented in the literature and plays a key role in the architecture of most successful real-world P2P applications. However, till now, this heterogeneity has not been taken into account when architecting solutions for complex query processing, especially in DHT networks. With this work we attempt to fill this gap for optimizing the processing of range queries. Significant performance improvements are achieved due to (i) ensuring a much smaller hop count performance for range queries, and (ii) avoiding the dangers and inefficiencies of relying for range query processing on weak nodes, with respect to processing, storage, and communication capacities, and with intermittent connectivity. We present detailed experimental results validating our performance claims.
Abstract: We consider the problem of planning a mixed line rate
(MLR) wavelength division multiplexing (WDM) transport
optical network. In such networks, different modulation formats
are usually employed to support transmission at different line
rates. Previously proposed planning algorithms have used a
transmission reach bound for each modulation format/line rate,
mainly driven by single line rate systems. However, transmission
experiments in MLR networks have shown that physical layer
interference phenomena are more severe among transmissions
that utilize different modulation formats. Thus, the transmission
reach of a connection with a specific modulation format/line rate
depends also on the other connections that co-propagate with it
in the network. To plan a MLR WDM network, we present
routing and wavelength assignment (RWA) algorithms that
adapt the transmission reach of each connection according to the
use of the modulation formats/line rates in the network. The
proposed algorithms are able to plan the network so as to
alleviate cross-rate interference effects, enabling the
establishment of connections of acceptable quality over paths that
would otherwise be prohibited.
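As a rough, purely illustrative sketch of the adaptive-reach idea (the reach-reduction model and all names below are assumptions, not the paper's algorithm), the following Python function tests whether a candidate path is feasible for a chosen modulation format/line rate, tightening the baseline reach when connections of a different line rate co-propagate on some link of the path.

```python
def path_is_feasible(path_links, link_length, format_reach, copropagating_rates,
                     rate, penalty_km=200.0):
    """Hedged illustration of a reach-adaptive feasibility test for one candidate path.

    path_links: list of link identifiers forming the path.
    link_length: dict link -> length in km.
    format_reach: baseline transmission reach (km) of the chosen modulation format/line rate.
    copropagating_rates: dict link -> set of line rates already carried on that link.
    rate: line rate of the new connection.
    penalty_km: assumed reach reduction applied when a different line rate co-propagates
                on at least one link of the path (purely illustrative value).
    """
    length = sum(link_length[l] for l in path_links)
    cross_rate = any(r != rate
                     for l in path_links
                     for r in copropagating_rates.get(l, ()))
    effective_reach = format_reach - penalty_km if cross_rate else format_reach
    return length <= effective_reach
```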
Abstract: The population protocol model (PP) proposed by Angluin et al. [2] describes sensor networks consisting of passively mobile finite-state agents. The agents sense their environment and communicate in pairs to carry out some computation on the sensed values. The mediated population protocol model (MPP) [13] extended the PP model by communication links equipped with a constant size buffer. The MPP model was proved in [13] to be stronger than the PP model. However, its most important contribution is that it provides us with the ability to devise optimizing protocols, approximation protocols and protocols that decide properties of the communication graph on which they run. The latter case, suggests a simplified model, the GDM model, that was formally defined and studied in [11]. GDM is a special case of MPP that captures MPP's ability to decide properties of the communication graph. Here we survey recent advances in the area initiated by the proposal of the PP model and at the same time we provide new protocols, novel ideas and results.
Abstract: In this paper we present a signaling protocol for
QoS differentiation suitable for optical burst switching networks.
The proposed protocol is a two-way reservation scheme that
employs delayed and in-advance reservation of resources. In this
scheme delayed reservations may be relaxed, introducing a
reservation duration parameter that is negotiated during call
setup phase. This feature allows bursts to reserve resources
beyond their actual size to increase their successful forwarding
probability and is used to provide QoS differentiation. The
proposed signaling protocol offers a low blocking probability for
bursts that can tolerate the round-trip delay required for the
reservations. We present the main features of the protocol and
describe in detail timing considerations regarding the call setup
and the reservation process. We also describe several methods
for choosing the protocol parameters so as to optimize
performance and present corresponding evaluation results.
Furthermore, we compare the performance of the proposed
protocol against that of two other typical reservation protocols, a
Tell-and-Wait and a Tell-and-Go protocol.
Abstract: We consider the conflicting problems of ensuring data-access load balancing and efficiently processing range queries on peer-to-peer data networks maintained over Distributed Hash Tables (DHTs). Placing consecutive data values in neighboring peers is frequently used in DHTs since it accelerates range query processing. However, such a placement is highly susceptible to load imbalances, which are preferably handled by replicating data (since replication also introduces fault tolerance benefits). In this paper, we present HotRoD, a DHT-based architecture that deals effectively with this combined problem through the use of a novel locality-preserving hash function, and a tunable data replication mechanism which allows trading off replication costs for fair load distribution. Our detailed experimentation study shows strong gains in both range query processing efficiency and data-access load balancing, with low replication overhead. To our knowledge, this is the first work that concurrently addresses the two conflicting problems using data replication.
Abstract: We consider the conflicting problems of ensuring data-access load balancing and efficiently processing range queries on peer-to-peer data networks maintained over Distributed Hash Tables (DHTs). Placing consecutive data values in neighboring peers is frequently used in DHTs since it accelerates range query processing. However, such a placement is highly susceptible to load imbalances, which are preferably handled by replicating data (since replication also introduces fault tolerance benefits). In this paper, we present HotRoD, a DHT-based architecture that deals effectively with this combined problem through the use of a novel locality-preserving hash function, and a tunable data replication mechanism which allows trading off replication costs for fair load distribution. Our detailed experimentation study shows strong gains in both range query processing efficiency and data-access load balancing, with low replication overhead. To our knowledge, this is the first work that concurrently addresses the two conflicting problems using data replication.
Abstract: Future Grid Networks should be able to provide Quality of Service (QoS) guarantees
to their users. In this work we examine the way Grid resources should be
configured so as to provide deterministic delay guarantees to Guaranteed Service
(GS) users and fairness to Best Effort (BE) users. The resources are partitioned
in groups that serve GS users only, or BE users only, or both types of users with
different priorities. Furthermore, the GS users are registered to the resources
either statically or dynamically, while both single and multi-Cpu resources are
examined. Finally the proposed resource configurations for providing QoS are
implemented in the GridSim environment and a number of simulations are executed.
Our results indicate that the allocation of resources to both types of users, with
different priorities, results in fewer missed deadlines and better resource utilization.
Finally, benefits can be derived from the dynamic registration of GS users to the resources.
Abstract: The ROTA project is aimed at offering high-speed network connections to ships traveling at archipelagos, i.e. seas with dense clusters of islands, such as the Aegean sea. The ROTA system is based on the use of directional antennas on-board the ships and stationary antennas on the islands. The stationary antennas on the islands are connected to high-speed commercial computer networks, which are available on practically all inhabited islands at the Aegean sea, thus implementing a Virtual Private Network (VPN). This approach is much more cost-effective when compared to current approaches that rely on the usage of satellite communications. This paper presents the concept behind ROTA platform.
Abstract: A key problem in networks that support advance reservations is the routing and time scheduling of connections with flexible starting time and known data transfer size. In this paper we present a multicost routing and scheduling algorithm for selecting the path to be followed by such a connection and the time the data should start and end transmission at each link so as to minimize the reception time at the destination, or optimize some other performance criterion. The utilization profiles of the network links, the link propagation delays, and the parameters of the connection to be scheduled form the inputs to the algorithm. We initially present a scheme of non-polynomial complexity to compute a set of so called non-dominated candidate paths, from which the optimal path can be found. We then propose two mechanisms to appropriately prune the set of candidate paths in order to find multicost routing and scheduling algorithms of polynomial complexity. We examine the performance of the algorithms in the special case of an Optical Burst Switched network. Our results indicate that the proposed polynomial-time algorithms have performance that is very close to that of the optimal algorithm. We also study the effects network propagation delays and link-state update policies have on performance.
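The notion of non-dominated candidate paths can be illustrated with a simple Pareto filter over per-path cost vectors; the sketch below (Python, with hypothetical cost components such as propagation delay and earliest reception time) shows only the domination test and the pruning step, not the full multicost routing and scheduling algorithm.

```python
def dominates(a, b):
    """Cost vector a dominates b if it is no worse in every component and strictly
    better in at least one (all components are taken as 'smaller is better')."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def prune_dominated(candidates):
    """Keep only the non-dominated candidate paths.

    candidates: list of (path, cost_vector) pairs, where a cost vector might hold,
    for example, the propagation delay and the earliest possible reception time
    of the data at the destination over that path.
    """
    kept = []
    for path, cost in candidates:
        if any(dominates(other, cost) for _, other in candidates):
            continue
        kept.append((path, cost))
    return kept
```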
Abstract: We propose QoS-aware scheduling algorithms for Grid Networks that are capable of optimally or near-optimally
assigning computation and communication tasks to grid resources. The routing and scheduling algorithms to be
presented take as input the resource utilization profiles and the task characteristics and QoS requirements, and
co-allocate resources while accounting for the dependencies between communication and computation tasks.
Keywords: communication and computation utilization profiles, multicost routing and scheduling, grid
computing.
Abstract: A key problem in networks that support advance reservations is the routing and time scheduling of
connections with flexible starting time and known data transfer size. In this paper we present a multicost
routing and scheduling algorithm for selecting the path to be followed by such a connection and the time the
data should start and end transmission at each link so as to minimize the reception time at the destination,
or optimize some other performance criterion. The utilization profiles of the network links, the link
propagation delays, and the parameters of the connection to be scheduled form the inputs to the algorithm.
We initially present a scheme of non-polynomial complexity to compute a set of so-called non-dominated
candidate paths, from which the optimal path can be found. We then propose two mechanisms to
appropriately prune the set of candidate paths in order to find multicost routing and scheduling algorithms of
polynomial complexity. We examine the performance of the algorithms in the special case of an Optical
Burst Switched network. Our results indicate that the proposed polynomial time algorithms have performance that is very close to that of the optimal algorithm. We also study the effects network
propagation delays and link-state update policies have on performance.
Abstract: A key problem in networks that support advance
reservations is the routing and time scheduling of connections
with flexible starting time. In this paper we present a multicost
routing and scheduling algorithm for selecting the path to be
followed by such a connection and the time the data should start
so as to minimize the reception time at the destination, or to optimize some
other QoS requirement. The utilization profiles of the network
links, the link propagation delays, and the parameters of the
connection to be scheduled form the inputs to the algorithm. We
initially present a scheme of non-polynomial complexity to
compute a set of so-called non-dominated candidate paths, from
which the optimal path can be found. By appropriately pruning
the set of candidate paths using path pseudo-domination
relationships, we also find multicost routing and scheduling
algorithms of polynomial complexity. We examine the
performance of the algorithms in the special case of an Optical
Burst Switched network. Our results indicate that the proposed
polynomial time algorithms have performance that is very close
to that of the optimal algorithm.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM)
has been recently proposed as a modulation technique for optical
networks, due to its good spectral efficiency and impairment
tolerance. Optical OFDM is much more flexible compared to
traditional WDM systems, enabling elastic bandwidth
transmissions. We consider the planning problem of an OFDM-based optical network where we are given a traffic matrix that
includes the requested transmission rates of the connections to be
served. Connections are provisioned for their requested rate by
elastically allocating spectrum using a variable number of OFDM
subcarriers. We introduce the Routing and Spectrum Allocation
(RSA) problem, as opposed to the typical Routing and
Wavelength Assignment (RWA) problem of traditional WDM
networks, and present various algorithms to solve the RSA. We
start by presenting an optimal ILP RSA algorithm that minimizes
the spectrum used to serve the traffic matrix, and also present a
decomposition method that breaks RSA into two constituent
subproblems, namely, (i) routing and (ii) spectrum allocation
(R+SA) and solves them sequentially. We also propose a heuristic
algorithm that serves connections one-by-one and use it to solve
the planning problem by sequentially serving all traffic matrix
connections. To feed the sequential algorithm, two ordering
policies are proposed; a simulated annealing meta-heuristic is also
proposed to obtain even better orderings. Our results indicate
that the proposed sequential heuristic with appropriate ordering
yields close to optimal solutions in low running times.
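One elementary step of such a sequential heuristic is to find, for a connection, a contiguous block of subcarriers that is free on every link of its path; the following first-fit sketch (Python, hypothetical data layout) illustrates this allocation step only, not the paper's RSA algorithms.

```python
def first_fit_spectrum(path_links, occupied, demand, total_subcarriers):
    """Find the lowest-indexed block of `demand` contiguous subcarriers that is
    free on every link of the path, reserve it, and return its (start, end)
    indices; return None if no such block exists.

    occupied: dict mapping each link to the set of subcarrier indices in use on it.
    """
    for start in range(total_subcarriers - demand + 1):
        block = range(start, start + demand)
        if all(s not in occupied[link] for link in path_links for s in block):
            for link in path_links:          # reserve the block on the whole path
                occupied[link].update(block)
            return (start, start + demand - 1)
    return None
```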
Abstract: In this paper we demonstrate the significant impact of the user mobility rates on the performance of two different approaches for designing routing protocols for ad-hoc mobile networks: (a) the route creation and maintenance approach and (b) the "support" approach, which forces a few hosts to move, acting as
"helpers" for message delivery. We study a set of representative protocols for each approach, i.e.~DSR and ZRP for the first approach and RUNNERS for the second. We have implemented the three protocols and performed a large scale and detailed simulation study of their performance. Our findings are: (i) DSR achieves low message delivery rates but manages to deliver messages very fast; (ii) ZRP behaves well in networks of low mobility rate, while its performance drops for networks of highly mobile users; (iii) RUNNERS seem to tolerate well (and in fact benefit from) high mobility rates.
Based on our investigation, we design and implement two new protocols that result from the synthesis of the investigated routing approaches. We conducted an extensive, comparative simulation study of their performance. The new protocols behave well both in networks of diverse mobility motion rates, and in some cases they even outperform the original ones by achieving lower message delivery delays.
Abstract: Data propagation in wireless sensor
networks is usually performed as a multihop process.
Thus, to deliver a single message, the resources of many sensor nodes are used and a lot of energy is spent.
Recently, a novel approach has been gaining momentum because of important applications:
that of having a mobile sink move inside the network area and collect
the data at low energy cost.
Here we extend this line of research by proposing and evaluating three new protocols.
Our protocols are novel in
a) investigating the impact of having many mobile sinks,
b) proposing and evaluating a mix of static and mobile sinks in weak models with restricted mobility, and
c) proposing a distributed protocol that tends to equally spread the sinks in the network to
further improve performance.
Our protocols are simple, based on randomization and assume locally
obtainable information. We perform an extensive evaluation via simulation; our
findings demonstrate that our solutions scale very well with respect to the number of sinks
and significantly reduce energy consumption and delivery delay.
Abstract: We present SeAl, a novel data/resource and data-access management infrastructure designed for the purpose of addressing a key problem in P2P data sharing networks, namely the problem of wide-scale selfish peer behavior. Selfish behavior has been manifested and well documented and it is widely accepted that unless this is dealt with, the scalability, efficiency, and the usefulness of P2P sharing networks will be diminished. SeAl essentially consists of a monitoring/accounting subsystem, an auditing/verification subsystem, and incentive mechanisms. The monitoring subsystem facilitates the classification of peers into selfish/altruistic. The auditing/verification layer provides a shield against perjurer/slandering and colluding peers that may try to cheat the monitoring subsystem. The incentive mechanisms effectively utilize these layers so as to increase the computational/networking and data resources that are available to the community. Our extensive performance results show that SeAl performs its tasks swiftly, while the overheads introduced by our accounting and auditing mechanisms, in terms of response time, network, and storage, are very small.
Abstract: We present SeAl, a novel data/resource and data-access management infrastructure designed for the purpose of addressing a key problem in P2P data sharing networks, namely the problem of wide-scale selfish peer behavior. Selfish behavior has been manifested and well documented and it is widely accepted that unless this is dealt with, the scalability, efficiency, and the usefulness of P2P sharing networks will be diminished. SeAl essentially consists of a monitoring/accounting subsystem, an auditing/verification subsystem, and incentive mechanisms. The monitoring subsystem facilitates the classification of peers into selfish/altruistic. The auditing/verification layer provides a shield against perjurer/slandering and colluding peers that may try to cheat the monitoring subsystem. The incentive mechanisms effectively utilize these layers so as to increase the computational/networking and data resources that are available to the community. Our extensive performance results show that SeAl performs its tasks swiftly, while the overheads introduced by our accounting and auditing mechanisms, in terms of response time, network, and storage, are very small.
Abstract: In this PhD thesis, we try to use formal logic and threshold phenomena that asymptotically emerge with certainty in order to build new trust models and to evaluate the existing ones. The departure point of our work is that dynamic, global computing systems are not amenable to a static viewpoint of the trust concept, no matter how this concept is formalized. We believe that trust should be a statistical, asymptotic concept to be studied in the limit as the system's components grow according to some growth rate. Thus, our main goal is to define trust as an emerging system property that "appears" or "disappears" when a set of properties hold, asymptotically with probability 0 or 1 respectively. Here we try to combine first and second order logic in order to analyze the trust measures of specific network models. Moreover, we can use formal logic in order to determine whether generic reliability trust models provide a method for deriving trust between peers/entities as the network's components grow. Our approach can be used in a wide range of applications, such as monitoring the behavior of peers, providing a measure of trust between them, and assessing the level of reliability of peers in a network. Wireless sensor networks are comprised of a vast number of ultra-small autonomous computing, communication and sensing devices, with restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Sensor networks can be very useful in practice. Such systems should at least guarantee the confidentiality and integrity of the information reported to the controlling authorities regarding the realization of environmental events. Therefore, key establishment is critical for the protection of wireless sensor networks and the prevention of adversaries from attacking the network. Finally, in this dissertation we also propose three distributed group key establishment protocols suitable for such energy-constrained networks. This dissertation is composed of two parts. Part I develops the theory of the first and second order logic of graphs - their definition, and the analysis of their properties that are expressible in the first order language of graphs. In Part II we introduce some new distributed group key establishment protocols suitable for sensor networks. Several key establishment schemes are derived and their performance is demonstrated.
Abstract: What is the price of anarchy when unsplittable demands are routed
selfishly in general networks with load-dependent edge delays? Motivated by
this question we generalize the model of [14] to the case of weighted congestion
games. We show that varying demands of users crucially affect the nature of these
games, which are no longer isomorphic to exact potential games, even for very
simple instances. Indeed we construct examples where even a single-commodity
(weighted) network congestion game may have no pure Nash equilibrium.
On the other hand, we study a special family of networks (which we call the
ℓ-layered networks) and we prove that any weighted congestion game on such
a network with resource delays equal to the congestions possesses a pure Nash
equilibrium. We also show how to construct one in pseudo-polynomial time.
Finally, we give a surprising answer to the question above for such games: The
price of anarchy of any weighted ℓ-layered network congestion game with m
edges and edge delays equal to the loads is Θ(log m / log log m).
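In the games considered here the delay of an edge equals its load, i.e. the total weight of the players using it; the sketch below (Python, hypothetical instance format) checks whether a given pure strategy profile is a Nash equilibrium by testing every unilateral deviation, which is the property shown to exist for weighted congestion games on ℓ-layered networks.

```python
def edge_loads(profile, weights):
    """profile: dict player -> tuple of edges (its chosen path);
    weights: dict player -> traffic demand (weight)."""
    loads = {}
    for player, path in profile.items():
        for e in path:
            loads[e] = loads.get(e, 0) + weights[player]
    return loads

def path_cost(path, loads):
    # The delay of an edge equals its congestion, i.e. the total load on it.
    return sum(loads.get(e, 0) for e in path)

def is_pure_nash(profile, weights, strategies):
    """strategies: dict player -> list of available paths (tuples of edges).
    Returns True if no player can lower its delay by unilaterally switching paths."""
    loads = edge_loads(profile, weights)
    for player, current in profile.items():
        w = weights[player]
        current_cost = path_cost(current, loads)
        for alt in strategies[player]:
            if alt == current:
                continue
            # Load seen by the player on each edge of alt after moving its weight there.
            alt_cost = sum(loads.get(e, 0) - (w if e in current else 0) + w for e in alt)
            if alt_cost < current_cost:
                return False
    return True
```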
Abstract: Our position is that a key to research efforts on ensuring high
performance in very large scale sharing networks is the idea of
volunteering; recognizing that such networks are comprised of
largely heterogeneous nodes in terms of their capacity and
behaviour, and that, in many real-world manifestations, a few
nodes carry the bulk of the request service load. In this paper we
outline how we employ volunteering as the basic idea using
which we develop altruism-endowed self-organizing sharing
networks to help solve two open problems in large-scale peer-to-peer
networks: (i) to develop an overlay topology structure that
enjoys better performance than DHT-structured networks and,
specifically, to offer O(log log N) routing performance in a
network of N nodes, instead of O(log N), and (ii) to efficiently
process complex queries and range queries, in particular.
Abstract: In this work, we study protocols so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction, we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol. Moreover, we assume pairwise interactions between the processes that are scheduled by a fair adversary. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network. We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. The expected time to convergence of our protocols is analyzed under a uniform random scheduler. Finally, we prove several universality results by presenting generic protocols that are capable of