Abstract: In this work we propose a new unified PON RAN architecture for LTE mobile backhaul networks, employing ring-based WDM PONs. The proposed architecture supports dynamic setup of virtual circuits for inter-base-station communication over a dedicated λ-LAN channel. The reservation mechanism is arbitrated by the OLT, which also monitors the traffic imbalances of the downstream channels. The proposed architecture also supports load balancing by dynamically reallocating and sharing the capacity of the downstream wavelengths.

Abstract: Core networks of the future will have a translucent and eventually transparent optical structure. Ultra-high-speed end-to-end connectivity with high quality of service and high reliability will be realized through the exploitation of optimized protocols and lightpath routing algorithms. These algorithms will complement a flexible control and management plane integrated in the proposed solution. Physical layer impairments and optical performance are monitored and incorporated in impairment-aware lightpath routing algorithms. These algorithms will be integrated into a novel dynamic network planning tool that will consider dynamic traffic characteristics, a reconfigurable optical layer, and varying physical impairment and component characteristics. The network planning tool along with extended control planes will make it possible to realize the vision of optical transparency. This article presents a novel framework that addresses dynamic cross-layer network planning and optimization while considering the development of a future transport network infrastructure.

Abstract: We present a new dynamic graph structure specifically suited
for large-scale transportation networks that provides simultaneously three
unique features: compactness, agility and dynamicity. We demonstrate
its practicality and superiority by conducting an experimental study for
shortest route planning in large-scale European and US road networks with a few dozen million nodes and edges. Ours is the first approach to address the dynamic maintenance of a large-scale graph with ordered elements in a contiguous part of memory, while allowing arbitrary online reordering of its elements.
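The combination of contiguity and dynamicity described above can be illustrated with a toy sketch (a hypothetical structure for illustration, not the authors' actual one): every node's adjacency list lives in one contiguous array with a little slack, and an insertion into a full range shifts the suffix to make room.

```python
# Minimal sketch of a compact, dynamic forward-star graph (illustrative
# assumption, not the paper's actual data structure): all edges live in a
# single contiguous array, each node owning one slot range with some slack.

class CompactDynamicGraph:
    def __init__(self, n, slack=2):
        self.n = n
        self.start = [i * slack for i in range(n)]  # first edge slot of node i
        self.count = [0] * n                        # number of edges of node i
        self.edges = [None] * (n * slack)           # one contiguous edge array

    def add_edge(self, u, v):
        end = self.start[u] + self.count[u]
        cap = self.start[u + 1] if u + 1 < self.n else len(self.edges)
        if end == cap:            # u's slot range is full:
            self._grow(u)         # shift the array suffix to create slack
            end = self.start[u] + self.count[u]
        self.edges[end] = v
        self.count[u] += 1

    def _grow(self, u, extra=2):
        pos = self.start[u] + self.count[u]
        self.edges[pos:pos] = [None] * extra        # insert slack after u's edges
        for w in range(u + 1, self.n):
            self.start[w] += extra

    def neighbors(self, u):
        s = self.start[u]
        return self.edges[s:s + self.count[u]]
```

An online reordering operation would, in the same spirit, permute whole slot ranges inside the single array and update `start`, which is what preserves the memory locality that shortest-path queries benefit from.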

Abstract: In this paper, we present a new hybrid optical burst switching (HOBS) architecture that takes advantage of the pre-transmission idle time during lightpath establishment. In dynamic circuit switching (wavelength routing) networks, capacity is hard-reserved immediately upon the arrival of a setup message at a node, but it is used at least a round-trip time later. This waste of resources is significant in optical multi-gigabit networks, and the idle capacity can be used to transmit traffic of a lower class of service in a non-competing way. The proposed hybrid OBS architecture takes advantage of this idle time to transmit one-way optical bursts of a lower class of service, while high-priority data explicitly requests and establishes end-to-end lightpaths. In the proposed scheme, the two control planes (two-way and one-way OBS reservation) are merged, in the sense that each SETUP message used for two-way lightpath establishment is associated with a one-way burst transmission and is therefore modified to also carry routing and overhead information for the one-way traffic. In this paper, we present the main architectural features of the proposed hybrid scheme and assess its performance by conducting simulation experiments on the NSFNET backbone topology. The extensive network study revealed that the proposed hybrid architecture can achieve and sustain an adequate burst transmission rate with a finite worst-case delay.
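The pre-transmission idle time exploited above is easy to quantify with a back-of-the-envelope calculation (the link length and line rate below are illustrative assumptions, not figures from the paper):

```python
# Capacity reserved at SETUP sits unused for at least one round-trip time,
# so at multi-gigabit rates the wasted volume per lightpath setup is large.

C = 3e8 / 1.5                       # speed of light in fiber, ~2e8 m/s
link_km = 2000                      # e.g. a long-haul lightpath (assumed)
rate_gbps = 10                      # line rate (assumed)

rtt_s = 2 * link_km * 1e3 / C       # two-way propagation delay
idle_bits = rate_gbps * 1e9 * rtt_s # bits that could have been sent meanwhile

print(f"RTT = {rtt_s * 1e3:.0f} ms, idle capacity = {idle_bits / 1e6:.0f} Mb")
```

For these assumed numbers the RTT is 20 ms, i.e. 200 Mb of reserved but unused capacity per setup, which is the headroom the one-way low-priority bursts fill.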

Abstract: We study the problem of fast and energy-efficient collection of sensory data using a mobile sink, in wireless sensor networks in which both the sensors and the sink move. Motivated by relevant applications, we focus on dynamic sensory mobility and heterogeneous sensor placement. Our approach is to exploit the sensor motion to adaptively propagate information based on local conditions (such as high placement concentrations), so that the sink gradually "learns" the network and accordingly optimizes its motion. Compared to relevant solutions in the state of the art (such as the blind random walk, biased walks, and even optimized deterministic sink mobility), our method significantly reduces latency (the improvement ranges from 40% for uniform placements to 800% for heterogeneous ones), while also improving the success rate and keeping the energy dissipation at very satisfactory levels.

Abstract: We investigate the problem of efficient data collection in wireless sensor networks where both the sensors and the sink move. We especially study the important, realistic case where the spatial distribution of sensors is non-uniform and their mobility is diverse and dynamic. The basic idea of our protocol is for the sink to benefit from the local information that sensors spread in the network as they move, in order to extract current local conditions and accordingly adjust its trajectory. Thus, sensory motion anyway present in the network serves as a low-cost replacement of network information propagation. In particular, we investigate two variations of our method: a) the greedy motion of the sink towards the region of highest density each time and b) taking into account the aggregate density in wider network regions. An extensive comparative evaluation against relevant data collection methods (both randomized and optimized deterministic) demonstrates that our approach achieves significant performance gains, especially in non-uniform placements (but also in uniform ones). In fact, the greedy version of our approach is more suitable in networks where the concentration regions appear in a spatially balanced manner, while the aggregate scheme is more appropriate in networks where the concentration areas are geographically correlated.
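The "greedy" variation above can be sketched in a few lines (a toy grid model with an assumed density report, not the paper's protocol): the sink repeatedly takes one step toward the densest region it currently knows about.

```python
# Toy sketch of greedy sink motion: sensors passing near the sink report
# per-region density estimates; the sink steps toward the densest region.

def greedy_sink_step(sink, density):
    """sink: (x, y) grid cell; density: dict mapping region cell -> density."""
    target = max(density, key=density.get)          # densest known region
    dx = (target[0] > sink[0]) - (target[0] < sink[0])
    dy = (target[1] > sink[1]) - (target[1] < sink[1])
    return (sink[0] + dx, sink[1] + dy)

density = {(0, 0): 3, (4, 2): 11, (1, 4): 6}        # assumed reports
sink = (0, 0)
for _ in range(6):                                  # six greedy moves
    sink = greedy_sink_step(sink, density)
print(sink)  # → (4, 2), the densest region
```

The "aggregate" variation would instead score each candidate direction by the summed density over a wider cone of regions, which is what makes it better suited to geographically correlated concentrations.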

Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves (not just the sinks) are mobile. Furthermore, we focus on mobility scenarios characterized by heterogeneous, highly changing mobility roles in the network. To capture these high dynamics of diverse sensory motion we propose a novel network parameter,
the mobility level, which, although simple and local, quite accurately takes into account both the spatial and speed characteristics of motion. We then propose adaptive data dissemination protocols that use the mobility level estimation to optimize performance, by basically exploiting high mobility (redundant message ferrying) as a cost-effective replacement of flooding, e.g. the sensors tend to dynamically propagate less data in the presence
of high mobility, while nodes of high mobility are favored for moving data around. These dissemination schemes are enhanced by a distance-sensitive probabilistic message flooding inhibition mechanism that further reduces communication cost, especially for fast nodes of high mobility level, and as distance to data destination decreases. Our simulation findings
demonstrate significant performance gains of our protocols compared to non-adaptive protocols, i.e. adaptation increases the success rate and reduces latency (even by 15%) while at the same time significantly reducing energy dissipation (in most cases by even 40%). Also, our adaptive schemes achieve significantly higher message delivery ratio and
satisfactory energy-latency trade-offs when compared to flooding when sensor nodes have
limited message queues.
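A hedged sketch of the "mobility level" idea described above (the exact definition is the paper's; the particular combination of recent speed and spatial spread below, and the inhibition formula, are illustrative assumptions):

```python
import math

def mobility_level(positions, window=5):
    """positions: recent (x, y) samples of one node, one per time step."""
    recent = positions[-window:]
    # speed component: mean displacement per step
    speed = sum(math.dist(a, b) for a, b in zip(recent, recent[1:])) \
            / max(len(recent) - 1, 1)
    # spatial component: diameter of the recently visited area
    spread = max((math.dist(a, b) for a in recent for b in recent), default=0.0)
    return speed * spread

def forward_probability(level, dist_to_sink, max_dist):
    # high mobility and short remaining distance both inhibit flooding:
    # a fast node ferries data itself instead of re-broadcasting it.
    return (1.0 / (1.0 + level)) * (dist_to_sink / max_dist)
```

Note how both knobs act in the direction the abstract describes: a stationary node (level 0) far from the destination forwards aggressively, while a fast node close to the destination mostly carries the data instead of flooding it.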

Abstract: We introduce a new modelling assumption for wireless sensor networks, that of node redeployment (addition of sensor devices during protocol evolution) and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks and adaptation becomes a powerful technique for protocol design. Under these modelling assumptions, we design, implement and evaluate a new power conservation scheme for efficient data propagation. Our scheme is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards improved operation choices. The scheme is simple, distributed and does not require exchange of control messages between nodes.
Implementing our protocol in software, we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of the network simulator ns-2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-offs between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation protocol) and good trade-offs achieved. Furthermore, the redeployment of additional sensors during network evolution and/or the heterogeneous deployment of sensors drastically improve the protocol performance when compared to ``equal total power'' simultaneous deployment of identical sensors at the start (i.e. the success rate increases up to four times while energy dissipation is reduced and, interestingly, latency is kept low).

Abstract: Wireless Sensor Networks are by nature highly dynamic and communication between sensors is completely ad hoc, especially when mobile devices are part of the setup. Numerous protocols and applications proposed for such networks
operate on the assumption that knowledge of the neighborhood is a priori available to all nodes. As a result, WSN deployments need to use or implement from scratch a neighborhood discovery mechanism. In this work we present a new protocol based on adaptive periodic beacon exchanges. We totally avoid continuous beaconing by adjusting the rate of broadcasts using the concept of consistency over the understanding of neighborhood that nearby devices share. We propose, implement and evaluate our adaptive neighborhood discovery protocol over our experimental testbed and using large scale simulations. Our results indicate that the
new protocol operates more efficiently than existing reference implementations while providing valid information to the applications that use it. Extensive performance evaluation indicates that it reduces generated network traffic by 90% and increases network lifetime by 20% compared to existing mechanisms that rely on continuous beaconing.
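The consistency-driven rate adjustment described above can be sketched as a simple backoff rule (the thresholds and factors below are illustrative assumptions, not the protocol's actual constants):

```python
# While a node's view of its neighborhood keeps matching what its neighbors
# report (it is "consistent"), the beacon interval backs off exponentially;
# any detected mismatch resets it, so changes are discovered quickly while
# stable neighborhoods generate almost no traffic.

MIN_INTERVAL, MAX_INTERVAL = 1.0, 64.0   # seconds (assumed bounds)

def next_interval(interval, consistent):
    if consistent:
        return min(interval * 2, MAX_INTERVAL)   # stable: beacon less often
    return MIN_INTERVAL                          # change detected: beacon fast
```

This is the same intuition behind trickle-style protocols: the steady-state cost collapses toward the maximum interval, which is where the large traffic reduction comes from.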

Abstract: We study the problem of secure routing in wireless sensor networks where the sensors and the sink can move during the execution of remote monitoring applications and communication is not necessarily directed towards the sink. We present a new routing protocol, APSR, that builds upon a collection of mechanisms so that the integrity and confidentiality of the information reported to the controlling authorities is secured. The mechanisms are simple to implement, rely only on local information and require O(1) storage per sensor. The protocol adapts to mobility and security challenges that may arise throughout the execution of the application. We take special care of wireless sensor networks that monitor dynamically changing environments and of applications that require operation for extended periods of time. APSR can detect when the current network conditions are about to change and prepares to adapt to the new conditions. We demonstrate how to deal with inside and outside attacks even when the network is adapting to internal and/or external events.

Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves
(not just the sinks) are mobile. Furthermore, we focus on mobility
scenarios characterized by heterogeneous, highly changing mobility
roles in the network.
To capture these high dynamics of diverse sensory motion
we propose a novel network parameter, the mobility level, which, although
simple and local, quite accurately takes into account both the
spatial and speed characteristics of motion. We then propose
adaptive data dissemination protocols that use the
mobility level estimation to optimize performance, by basically
exploiting high mobility (redundant message ferrying) as a cost-effective
replacement of flooding, e.g., the sensors tend to dynamically propagate
less data in the presence of high mobility, while nodes of high mobility
are favored for moving data around.
These dissemination schemes are enhanced by a distance-sensitive
probabilistic message flooding inhibition mechanism that
further reduces communication cost, especially for fast nodes
of high mobility level, and as distance to data destination
decreases. Our simulation findings demonstrate significant
performance gains of our protocols compared to non-adaptive
protocols, i.e., adaptation increases the success rate and reduces
latency (even by 15\%) while at the same time significantly
reducing energy dissipation (in most cases by even 40\%).
Also, our adaptive schemes achieve significantly
higher message delivery ratio and satisfactory energy-latency
trade-offs when compared to flooding when sensor nodes have limited message queues.

Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves
(not just the sinks) are mobile. We focus on mobility
scenarios characterized by heterogeneous, highly changing mobility
roles in the network.
To capture these high dynamics
we propose a novel network parameter, the mobility level, which, although
simple and local, quite accurately takes into account both the
spatial and speed characteristics of motion. We then propose
adaptive data dissemination protocols that use the
mobility level estimation to improve performance, basically
exploiting high mobility (redundant message ferrying) as a cost-effective
replacement of flooding, e.g., the sensors tend to dynamically propagate
less data in the presence of high mobility, while nodes of high mobility
are favored for moving data around.
These dissemination schemes are enhanced by a distance-sensitive
probabilistic message flooding inhibition mechanism that
further reduces communication cost, especially for fast nodes
of high mobility level, and as distance to data destination
decreases. Our simulation findings demonstrate significant
performance gains of our protocols compared to non-adaptive
protocols, i.e., adaptation increases the success rate and reduces
latency (even by 15\%) while at the same time significantly
reducing energy dissipation (in most cases by even 40\%).
Also, our adaptive schemes achieve significantly
higher message delivery ratio and satisfactory energy-latency
trade-offs when compared to flooding when sensor nodes have limited message queues.

Abstract: We consider sensor networks where the sensor nodes are attached on entities that move in a highly dynamic, heterogeneous manner. To capture this mobility diversity we introduce a new network parameter, the direction-aware mobility
level, which measures how fast and close each mobile node is expected to get to the data destination (the sink). We then provide local, distributed data dissemination protocols
that adaptively exploit the node mobility to improve performance. In particular, "high" mobility is used as a low cost replacement for data dissemination (due to the ferrying of data), while in the case of "low" mobility either a) data propagation redundancy is increased (when highly mobile neighbors exist) or b) long-distance data transmissions are used (when the entire neighborhood is of low mobility) to accelerate data dissemination towards the sink. An extensive performance comparison to relevant methods from
the state of the art demonstrates significant improvements, e.g., latency is reduced by up to 4 times while keeping energy dissipation and delivery success at very satisfactory levels.

Abstract: Wireless sensor networks are comprised of a vast number of
ultra-small autonomous computing, communication and sensing devices,
with restricted energy and computing capabilities, that co-operate
to accomplish a large sensing task. Such networks can be very useful
in practice, e.g.~in the local monitoring of ambient conditions and
reporting them to a control center. In this paper we propose a
distributed group key establishment protocol that uses mobile agents
(software) and is particularly suitable for energy constrained,
dynamically evolving ad-hoc networks. Our approach totally avoids
the construction and the maintenance of a distributed structure that
reflects the topology of the network. Moreover, it trades off
complex message exchanges for some additional local computation
in order to be applicable in dense and dynamic
sensor networks. The extra computations are simple for the devices
to implement and are evenly distributed across the participants of
the network leading to good energy balance. We evaluate the
performance of our protocol in a simulated environment and compare
our results with existing group key establishment protocols. The
security of the protocol is based on the Diffie-Hellman problem and
we used in our experiments its elliptic curve analog. Our findings
basically indicate the feasibility of implementing our protocol in
real sensor network devices and highlight the advantages and
disadvantages of each approach given the available technology and
the corresponding efficiency (energy, time) criteria.

Abstract: We investigate the problem of efficient data collection in wireless sensor networks where both the sensors and the sink move. We especially study the important, realistic case where the spatial distribution of sensors is non-uniform and their mobility is diverse and dynamic. The basic idea of our protocol is for the sink to benefit of the local information that sensors spread in the network as they move, in order to extract current local conditions and accordingly adjust its trajectory. Thus, sensory motion anyway present in the network serves as a low cost replacement of network information propagation. In particular, we investigate two variations of our method: a) the greedy motion of the sink towards the region of highest density each time and b) taking into account the aggregate density in wider network regions. An extensive comparative evaluation to relevant data collection methods (both randomized and optimized deterministic), demonstrates that our approach achieves significant performance gains, especially in non-uniform placements (but also in uniform ones). In fact, the greedy version of our approach is more suitable in networks where the concentration regions appear in a spatially balanced manner, while the aggregate scheme is more appropriate in networks where the concentration areas are geographically correlated. We also investigate the case of multiple sinks by suggesting appropriate distributed coordination methods.

Abstract: We discuss some new algorithmic and complexity issues in
coalitional and dynamic/evolutionary games, related to the understanding
of modern selfish and complex networks.
In particular: (a) We examine the achievement of equilibria via natural
distributed and greedy approaches in networks. (b) We present a model
of a coalitional game in order to capture the anarchy cost and complexity
of constructing equilibria in such situations. (c) We propose a stochastic
approach to some kinds of local interactions in networks, that can be
viewed also as extensions of the classical evolutionary game theoretic
setting.

Abstract: In this paper, we consider the problem of energy-balanced data propagation in wireless sensor networks and we generalise previous works by allowing realistic energy assignments. A new modelling of the process of energy consumption as a random walk, along with a new analysis, is proposed. Two new algorithms are presented and analysed. The first one is easy to implement and fast to execute, but it needs a priori assumptions on the process generating the data to be propagated. The second algorithm overcomes this need by inferring information from observation of the process. Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes. This represents an important contribution to energy-balanced data propagation in wireless sensor networks, given their highly dynamic nature.
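The energy-balance idea can be illustrated with a toy "slice" model (an assumption for illustration, not the paper's algorithms): a sensor either relays data one slice closer to the sink at unit cost, or transmits directly to the sink at a cost that grows quadratically with distance, and tuning the per-slice probabilities is what balances dissipation across slices.

```python
import random

def propagate(slice_index, p, energy_spent, rng):
    """Deliver one message born in slice `slice_index` to the sink (slice 0).

    In slice i: with probability p[i] hop one slice inward (cost ~ 1),
    otherwise transmit directly to the sink (cost ~ i^2)."""
    i = slice_index
    while i > 0:
        if rng.random() < p[i]:
            energy_spent[i] += 1        # cheap hop one slice inward
            i -= 1
        else:
            energy_spent[i] += i * i    # expensive direct transmission
            i = 0

rng = random.Random(7)
p = [0.0, 0.9, 0.7, 0.5]                # per-slice forwarding probabilities (assumed)
energy = [0, 0, 0, 0]
for _ in range(1000):                    # many messages born in the outer slice
    propagate(3, p, energy, rng)
print(energy)                            # spend per slice under this mix
```

Inner slices relay everyone's traffic, so pure hopping would drain them first; occasional direct transmissions from outer slices offload them, which is the balance the two algorithms in the abstract tune, either a priori or adaptively from observations.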

Abstract: In this paper we study the problem of basic communication
in ad-hoc mobile networks where the deployment area changes in a
highly dynamic way and is unknown. We call such networks
highly changing ad-hoc mobile networks.
For such networks we investigate an efficient communication protocol which extends
the idea (introduced in [WAE01,POMC01]) of exploiting the co-ordinated
motion of a small part of an ad-hoc mobile
network (the ``runners support") to achieve
very fast communication between any two mobile users of the network.
The basic idea of the new protocol presented here is, instead
of using a support of fixed size for the whole duration of the protocol,
to employ a support of some initial (small) size which
adapts (within a time that can be made suitably short) to the
actual levels of traffic and to the
(unknown and possibly rapidly changing) network area by
changing its size in order to converge to an optimal size,
thus satisfying certain Quality of Service criteria.
We provide here some proofs of correctness and fault tolerance
of this adaptive approach and we also provide analytical results
using Markov Chains and random walk techniques to show that such
an adaptive approach is, for this class of ad-hoc mobile networks, significantly more efficient than a simple non-adaptive
implementation of the basic ``runners support" idea.

Abstract: We introduce a new modelling assumption in wireless sensor networks, that of node redeployment (addition of sensor devices during the protocol evolution), and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks, and adaptation becomes a powerful technique for protocol design. Under this model, we design, implement and evaluate a power conservation scheme for efficient data propagation. Our protocol is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards best operation choices. Our protocol does not require exchange of control messages between nodes to coordinate. Implementing our protocol, we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of ns-2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-off between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation paradigm) and good trade-offs. Furthermore, redeployment of sensors during network evolution and/or heterogeneous deployment of sensors drastically improve the protocol performance when compared to "equal total power" simultaneous deployment of identical sensors at the start (the success rate increases up to four times while energy dissipation is reduced and, interestingly, latency is kept low).

Abstract: A vertical perspective, ranging from management and routing to physical layer options, concerning dynamic network monitoring and compensation of impairments (M&C), is given. Feasibility, reliability, and performance improvements in reconfigurable transparent networks are expected to arise from the consolidated assessment of network management and control specifications, as well as from a more accurate evaluation of available M&C techniques. In the network layer, physical-parameter-aware algorithms are foreseen to pursue reliable network performance. In the physical layer, some new M&C methods were developed, and a rating of the state of the art reported in the literature is given. Optical monitoring implementation and viability are discussed.

Abstract: An ad-hoc mobile network is a collection of mobile hosts, with wireless communication capability, forming a temporary network without the aid of any established fixed infrastructure. In such a (dynamically changing) network it is not at all easy to avoid broadcasting (and flooding).
In this paper we propose, theoretically analyse and experimentally validate a new and efficient protocol for pairwise communication. The protocol exploits the co-ordinated motion of a small part of the network (i.e. it is a semi-compulsory protocol) in order to provide to various senders and receivers an efficient support for message passing. Our implementation platform is the LEDA system, and we have tested the protocol for three classes of graphs (grids, random graphs and bipartite multi-stage graphs), each representing a different ``motion topology''.
Our theoretical analysis (based on properties of random walks) and our experimental measurements indicate that only a small fraction of the mobile stations are enough to be exploited by the support in order to achieve very fast communication between any pair of mobile stations.

Abstract: We consider algorithmic questions concerning the existence,
tractability and quality of atomic congestion games, among users that
are considered to participate in (static) selfish coalitions. We carefully
define a coalitional congestion model among atomic players.
Our findings in this model are quite interesting, in the sense that we
demonstrate many similarities with the non-cooperative case. For example,
there exist potentials proving the existence of Pure Nash Equilibria
(PNE) in the (even unrelated) parallel links setting; the Finite Improvement
Property collapses as soon as we depart from linear delays, but
there is an exact potential (and thus PNE) for the case of linear delays,
in the network setting; the Price of Anarchy on identical parallel
links demonstrates a quite surprising threshold behavior: it persists on
being asymptotically equal to that in the case of the non-cooperative
KP-model, unless we enforce a sublogarithmic number of coalitions.
We also show crucial differences, mainly concerning the hardness of algorithmic
problems that are solved efficiently in the non-cooperative case.
Although we demonstrate convergence to robust PNE, we also prove the
hardness of computing them. On the other hand, we can easily construct
a generalized fully mixed Nash Equilibrium. Finally, we propose a new
improvement policy that converges to PNE that are robust against (even
dynamically forming) coalitions of small size, in pseudo-polynomial time.
Keywords. Game Theory, Atomic Congestion Games, Coalitions, Convergence
to Equilibria, Price of Anarchy.
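For context, a minimal sketch of the classical non-cooperative Rosenthal potential that the coalitional potentials discussed above generalize (this is the textbook construction, not the paper's coalitional one): with linear delay d_e(x) = a_e * x on resource e, Phi(s) = sum_e sum_{k=1..x_e} a_e * k, and any unilateral improvement step changes Phi by exactly the mover's cost change, which is why Pure Nash Equilibria exist.

```python
# Rosenthal potential for linear-delay congestion games: an improving move
# decreases Phi by exactly the mover's cost decrease, so local minima of Phi
# are Pure Nash Equilibria.

def potential(loads, a):
    """loads: resource -> number of players on it; a: resource -> delay slope."""
    return sum(a[e] * x * (x + 1) // 2 for e, x in loads.items())

def player_cost(strategy, loads, a):
    """Cost of a player using the set `strategy` of resources."""
    return sum(a[e] * loads[e] for e in strategy)

# Two players on resource e1; one deviates to the empty resource e2.
a = {"e1": 1, "e2": 1}
before = {"e1": 2, "e2": 0}
after = {"e1": 1, "e2": 1}
cost_drop = player_cost({"e1"}, before, a) - player_cost({"e2"}, after, a)
assert potential(before, a) - potential(after, a) == cost_drop == 1
```

The coalitional results above show, among other things, where this construction survives (linear delays, unrelated parallel links) and where the Finite Improvement Property breaks once coalitions and non-linear delays enter.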

Abstract: We consider algorithmic questions concerning the existence, tractability and quality of Nash equilibria, in atomic congestion games among users participating in selfish coalitions.
We introduce a coalitional congestion model among atomic players and demonstrate many interesting similarities with the non-cooperative case. For example, there exists a potential function proving the existence of Pure Nash Equilibria (PNE) in the unrelated parallel links setting; in the network setting, the Finite Improvement Property collapses as soon as we depart from linear delays, but there is an exact potential (and thus PNE) for linear delays; the Price of Anarchy on identical parallel links demonstrates a quite surprising threshold behavior: it persists on being asymptotically equal to that in the case of the non-cooperative KP-model, unless the number of coalitions is sublogarithmic.
We also show crucial differences, mainly concerning the hardness of algorithmic problems that are solved efficiently in the non-cooperative case. Although we demonstrate convergence to robust PNE, we also prove the hardness of computing them. On the other hand, we propose a generalized fully mixed Nash Equilibrium that can be efficiently constructed in most cases. Finally, we propose a natural improvement policy and prove its convergence in pseudo-polynomial time to PNE which are robust against (even dynamically forming) coalitions of small size.


Abstract: In this paper, we present BAD, an application-level multicast infrastructure. BAD is designed to improve the performance of multicast dissemination trees, in both a static and a dynamic environment, where the effective bandwidth of the network links changes with time. Its main goal is to improve the data rate that end users perceive during a multicast operation. BAD can be used for the creation and management of multicast groups. It can be deployed over any DHT while retaining its fundamental advantage of bandwidth improvement. BAD consists of a suite of algorithms for node joins/leaves, bandwidth distribution to heterogeneous nodes, tree rearrangement and reduction of overhead. We have implemented BAD within the FreePastry system. We report on the results of a detailed performance evaluation which testifies to BAD's efficiency and low overhead. Specifically, our experiments show that the improvement in the minimum bandwidth ranges from 40% to 1400% and the improvement in the average bandwidth ranges from 60% to 250%.

Abstract: We study the fundamental naming and counting problems in networks that are anonymous, unknown, and possibly dynamic. Network dynamicity is modeled by the 1-interval connectivity model [KLO10]. We first prove that on static networks with broadcast, counting is impossible to solve without a leader, and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. With a leader we solve counting in linear time. Then we focus on dynamic networks with broadcast. We show that if nodes know an upper bound on the maximum degree that will ever appear, then they can obtain an upper bound on n. Finally, we replace broadcast with one-to-each, in which a node may send a different message to each of its neighbors. This variation is then proved to be computationally equivalent to a full-knowledge model with unique names.

Abstract: In this work, we study the propagation of influence and computation in dynamic networks that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. To allow for bounded end-to-end communication we propose a set of minimal temporal connectivity conditions that bound from above the time it takes for information to make progress in the network. We show that even in dynamic networks that are disconnected at every instant, information may spread as fast as in networks that are connected at every instant. Further, we investigate termination criteria when the nodes know some upper bound on each of the temporal connectivity conditions. We exploit our termination criteria to provide efficient protocols (optimal in some cases) that solve the fundamental counting and all-to-all token dissemination (or gossip) problems. Finally, we show that any protocol that is correct in instantaneous connectivity networks can be adapted to work in temporally connected networks.

Abstract: In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.

Abstract: This chapter aims at presenting certain important aspects of the design of lightweight, event-driven algorithmic solutions for data dissemination in wireless sensor networks that provide support for reliable, efficient and concurrency-intensive operation. We wish to emphasize that efficient solutions at several levels are needed, e.g., higher-level energy-efficient routing protocols and lower-level power management schemes. Furthermore, it is important to combine such different-level methods into integrated protocols and approaches. Such solutions must be simple, distributed and local. Two useful algorithmic design principles are randomization (to trade off efficiency and fault-tolerance) and adaptation (to adjust to high network dynamics towards improved operation). In particular, we provide a) a brief description of the technical specifications of state-of-the-art sensor devices, b) a discussion of possible models used to abstract such networks, emphasizing heterogeneity, c) some representative power management schemes, and d) a presentation of some characteristic protocols for data propagation. Crucial efficiency properties of these schemes and protocols (and their combinations, in some cases) are investigated by both rigorous analysis and performance evaluations through large-scale simulations.

Abstract: The problem of communication among mobile nodes is one of the most fundamental problems in ad hoc mobile networks and is at the core of many algorithms, such as those for counting the number of nodes, electing a leader, data processing, and several other important problems in ad hoc mobile networks. The work of Chatzigiannakis, Nikoletseas and Spirakis focuses on wireless mobile networks that are subject to highly dynamic structural changes created by mobility, channel fluctuations and device failures. These changes affect topological connectivity, occur with high frequency, and may not be predictable in advance. Therefore, the environment where the nodes move (in three-dimensional space with possible obstacles) as well as the motion that the nodes perform are \textit{input} to any distributed algorithm.

Abstract: In this chapter, our focus is on computational network analysis from a theoretical point of view. In particular, we study the \emph{propagation of influence and computation in dynamic distributed computing systems}. We focus on a \emph{synchronous message passing} communication model with bidirectional links. Our network dynamicity assumption is a \emph{worst-case dynamicity} controlled by an adversary scheduler, which has received much attention recently. We first study the fundamental \emph{naming} and \emph{counting} problems (and some variations) in
networks that are \emph{anonymous}, \emph{unknown}, and possibly dynamic. Network dynamicity is modeled here by the \emph{1-interval connectivity model}, in which communication is synchronous and a (worst-case) adversary
chooses the edges of every round subject to the condition that each instance is connected. We then replace this quite strong assumption by minimal \emph{temporal connectivity} conditions. These conditions only require that \emph{another causal influence occurs within every time-window of some given length}. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate \emph{termination criteria} in networks in which an upper bound on any of these metrics is known. We exploit these termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental \emph{counting} and \emph{all-to-all token dissemination} (or \emph{gossip}) problems. Finally, we propose another model of worst-case temporal connectivity, called \emph{local
communication windows}, that assumes a fixed underlying communication network and restricts the adversary to allow communication between local neighborhoods in every time-window of some fixed length. We prove some basic properties and provide a protocol for counting in this model.

Abstract: This paper addresses the problem of counting the size of a network where (i) processes have the same identifiers (anonymous nodes) and (ii) the network topology constantly changes (dynamic network). Changes are driven by a powerful adversary that can look at internal process states and add and remove edges in order to hinder the convergence of the algorithm to the correct count. The paper proposes two leader-based counting algorithms. Such algorithms are based on a technique that mimics an energy transfer between network nodes. The first algorithm assumes that the adversary cannot generate either disconnected network graphs or network graphs where nodes have degree greater than D. In this algorithm, the leader can count the size of the network and detect the counting termination in finite time (i.e., a conscious counting algorithm). The second algorithm assumes that the adversary only keeps the network graph connected at any time, and we prove that the leader can still converge to a correct count in a finite number of rounds, but it is not conscious of when this convergence happens.

Abstract: Counting in general, and estimating the cardinality of (multi-) sets in particular, is highly desirable for a large variety of applications, representing a foundational block for the efficient deployment and access of emerging internet-scale information systems. Examples of such applications range from optimizing query access plans in internet-scale databases, to evaluating the significance (rank/score) of various data items in information retrieval applications. The key constraints that any acceptable solution must satisfy are: (i) efficiency: the number of nodes that need to be contacted for counting purposes must be small in order to enjoy small latency and bandwidth requirements; (ii) scalability, seemingly contradicting the efficiency goal: arbitrarily large numbers of nodes may need to add elements to a (multi-) set, which dictates the need for a highly distributed solution that avoids server-based scalability, bottleneck, and availability problems; (iii) access and storage load balancing: counting and related overhead chores should be distributed fairly to the nodes of the network; (iv) accuracy: tunable, robust (in the presence of dynamics and failures) and highly accurate cardinality estimation; (v) simplicity and ease of integration: special, solution-specific indexing structures should be avoided. In this paper, we first contribute a highly distributed, scalable, efficient, and accurate (multi-) set cardinality estimator. Subsequently, we show how to use our solution to build and maintain histograms, which have been a basic building block for query optimization in centralized databases, facilitating their porting into the realm of internet-scale data networks.

Abstract: We consider a synchronous distributed system with n processes that communicate through a dynamic network guaranteeing 1-interval connectivity, i.e., the network topology graph might change at each interval while keeping the graph connected at any time. The processes belonging to the distributed system are identified through a set of labels L = {l1, l2, ..., lk} (with 1 ≤ k < n). In this challenging system model, the paper addresses the following problem: "counting the number of processes with the same label". We provide a distributed algorithm that is able to solve the problem based on the notion of energy transfer. Each process owns a fixed energy charge and tries to discharge itself by exchanging, at each round, at most half of its charge with neighbors. The paper also discusses when such counting is possible in the presence of failures. Counting processes with the same label in dynamic networks with homonyms is of great importance because it is as difficult as computing generic aggregating functions.
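The charge-halving idea can be sketched in a small simulation (illustrative only: the fixed ring topology, deterministic splitting, and fixed round budget below are simplifications, not the paper's algorithm). Every node starts with one unit of charge, the leader starts with zero and only absorbs; since total charge is conserved, the leader's charge converges to n - 1.

```python
# Illustrative sketch of leader-based counting via energy transfer (not the
# paper's exact algorithm).  Each round a non-leader gives away half of its
# charge, split equally among its two ring neighbors; the leader never sends.

def energy_transfer_count(n, rounds=1000):
    charge = [0.0] + [1.0] * (n - 1)           # node 0 is the leader
    for _ in range(rounds):
        out = [0.0] * n
        for v in range(1, n):                  # the leader never sends
            share = charge[v] / 2
            charge[v] -= share
            out[(v - 1) % n] += share / 2      # half of the share per neighbor
            out[(v + 1) % n] += share / 2
        for v in range(n):
            charge[v] += out[v]
    return round(charge[0]) + 1                # leader's charge -> n - 1

print(energy_transfer_count(8))                # -> 8
```

A dynamic adversary could pick a different connected graph each round; conservation of charge, which the correctness argument rests on, would still hold.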

Abstract: In this work we introduce two practical and interesting models of ad-hoc mobile networks: (a) hierarchical ad-hoc networks, comprised of dense subnetworks of mobile users interconnected by a very fast yet limited backbone infrastructure; (b) highly changing ad-hoc networks, where the deployment area changes in a highly dynamic way and is unknown to the protocol. In such networks, we study the problem of basic communication, i.e., sending messages from a sender node to a receiver node. For highly changing networks, we investigate an efficient communication protocol exploiting the coordinated motion of a small part of an ad-hoc mobile network (the "runners support") to achieve fast communication. Instead of using a fixed-size support for the whole duration of the protocol, this protocol employs a support of some initial (small) size which adapts (given some time, which can be made fast enough) to the actual levels of traffic and the (unknown and possibly rapidly changing) network area, by changing its size in order to converge to an optimal size, thus satisfying certain Quality of Service criteria. Using random walks theory, we show that such an adaptive approach is, for this class of ad-hoc mobile networks, significantly more efficient than a simple non-adaptive implementation of the basic "runners support" idea, introduced in [9,10]. For hierarchical ad-hoc networks, we establish communication by using a "runners" support in each lower level of the hierarchy (i.e., in each dense subnetwork), while the fast backbone provides interconnections at the upper level (i.e., between the various subnetworks). We analyze the time efficiency of this hierarchical approach. This analysis indicates that the hierarchical implementation of the support approach significantly outperforms a simple implementation of it in hierarchical ad-hoc networks.
Finally, we discuss a possible combination of the two approaches above (the hierarchical and the adaptive ones) that can be useful in ad-hoc networks that are both hierarchical and highly changing. Indeed, in such cases the hierarchical nature of these networks further supports the possibility of adaptation.

Abstract: In this work we investigate the problem of communication among mobile hosts, one of the most fundamental problems in ad-hoc mobile networks that is at the core of many algorithms. Our work investigates the extreme case of total absence of any fixed network backbone or centralized administration, instantly forming networks based only on mobile hosts with wireless communication capabilities, where topological connectivity is subject to frequent, unpredictable change.
For such dynamically changing networks we propose a set of protocols which exploit the coordinated (by the protocol) motion of a small part of the network in order to manage network operations. We show that such protocols can be designed to work correctly and efficiently for communication by avoiding message flooding. Our protocols manage to establish communication between any pair of mobile hosts within small, a priori guaranteed expected time bounds. Our results exploit and further develop some fundamental properties of random walks on finite graphs.
Apart from studying the general case, we identify two practical and interesting cases of ad-hoc mobile networks: a) hierarchical ad-hoc networks, b) highly changing ad-hoc networks, for which we propose protocols that efficiently deal with the problem of basic communication.
We have conducted a set of extensive experiments, comprising thousands of mobile hosts, in order to validate the theoretical results and show that our protocols achieve very efficient communication under different scenarios.

Abstract: Wireless sensor networks are a recently introduced category of ad hoc computer networks, composed of nodes of small size with limited computing and energy resources. Such nodes are capable of measuring physical properties such as temperature and humidity, of communicating wirelessly with each other and, in some cases, of interacting with their surrounding environment (through the use of electromechanical parts).
As these networks have begun to be widely available (in terms of cost and commercial hardware availability), their field of application and philosophy of use are constantly evolving. There are numerous examples of their applications, ranging from monitoring the biodiversity of a specific outdoor area to structural health monitoring of bridges, and networks ranging from a few tens of nodes to even thousands of nodes.
In this PhD thesis we investigated the following basic research lines related to wireless sensor networks:
a) their simulation,
b) the development of data propagation protocols suited to such networks and their evaluation through simulation,
c) the modelling of ``hostile'' circumstances (obstacles) during their operation and evaluation of their impact through simulation,
d) the development of a sensor network management application.
Regarding simulation, we initially placed an emphasis on issues such as the effective simulation of networks of several thousands of nodes, and in that respect we developed a network simulator (simDust), which is extendable through the addition of new data propagation protocols and visualization capabilities. This simulator was used to evaluate the performance of a number of characteristic data propagation protocols for wireless sensor networks. Furthermore, we developed a new protocol (VRTP) and evaluated its performance against other similar protocols. Our studies show that the new protocol, which uses dynamic changes of the transmission range of the network nodes, performs better in certain cases than other related protocols, especially in networks containing obstacles and in the case of non-homogeneous placement of nodes.
Moreover, we emphasized the addition of ``realistic'' conditions to the simulation of such protocols, conditions that have an adverse effect on their operation. Our goal was to introduce a model for obstacles that adds little computational overhead to a simulator, and to study the effect of the inclusion of such a model on data propagation protocols that use geographic information (absolute or relative). Such protocols are relatively sensitive to dynamic topology changes and network conditions. Through our experiments, we show that the inclusion of obstacles during simulation can have a significant effect on these protocols.
Finally, regarding applications, we initially proposed an architecture (WebDust/ShareSense) for the management of such networks, providing basic capabilities for managing them and developing applications on top of it. Features that set it apart are the capability of managing multiple heterogeneous sensor networks, openness, and the use of a peer-to-peer architecture for the interconnection of multiple sensor networks. A large part of the proposed architecture was implemented, while the overall architecture was extended to also include additional visualization capabilities.

Abstract: We study here the problem of determining the majority type in an arbitrary connected network, each vertex of which initially has one of two possible types. The vertices may have a few additional possible states and can interact in pairs only if they share an edge. Any (population) protocol is required to stabilize in the initial majority. We first present and analyze a protocol with 4 states per vertex that always computes the initial majority value, under any fair scheduler. As we prove, this protocol is optimal, in the sense that there is no population protocol that always computes majority with fewer than 4 states per vertex. However, this does not rule out the existence of a protocol with 3 states per vertex that is correct with high probability. To this end, we examine a very natural majority protocol with 3 states per vertex, introduced in [Angluin et al. 2008], where its performance has been analyzed for the clique graph. We study the performance of this protocol in arbitrary networks. We prove that, when the two initial states are put uniformly at random on the vertices, this protocol converges to the initial majority with probability higher than the probability of converging to the initial minority. In contrast, we present an infinite family of graphs on which the protocol can fail w.h.p., even when the difference between the initial majority and the initial minority is n − Θ(ln n). We also present another infinite family of graphs in which the protocol of Angluin et al. takes expected exponential time to converge. These two negative results build upon a very positive result concerning the robustness of the protocol on the clique. Surprisingly, the resistance of the clique to failure causes the failure in general graphs. Our techniques use new domination and coupling arguments for suitably defined processes whose dynamics capture the antagonism between the states involved.
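A 4-state exact-majority protocol of this kind can be simulated in a few lines (the rule set below is one standard formulation with strong states A/B and weak states a/b, assumed here for illustration; it is not necessarily the exact protocol analyzed above). Cancellations preserve the difference #A - #B, so the minority's strong states die out and the survivors convert all weak states.

```python
import random

# Simulation of a 4-state exact-majority population protocol (assumed rules):
#   (A,B) -> (a,b)   strong opposites cancel; #A - #B is preserved
#   (A,b) -> (A,a)   a strong state converts opposing weak states
#   (B,a) -> (B,b)
RULES = {('A', 'B'): ('a', 'b'), ('A', 'b'): ('A', 'a'), ('B', 'a'): ('B', 'b')}

def majority(states, seed=1):
    rng = random.Random(seed)
    states, n = list(states), len(states)
    # run until no rule applies to any ordered pair (stabilized)
    while any((states[u], states[v]) in RULES
              for u in range(n) for v in range(n) if u != v):
        u, v = rng.sample(range(n), 2)       # fair scheduler on the clique
        pair = (states[u], states[v])
        if pair in RULES:
            states[u], states[v] = RULES[pair]
    return {s.upper() for s in states}       # each node's output opinion

print(majority(['A'] * 6 + ['B'] * 4))       # -> {'A'}
```

With a strict initial majority the output set is a singleton regardless of the random interaction order, which mirrors the "always correct under any fair scheduler" property claimed for the 4-state protocol.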

Abstract: A central problem in distributed computing and telecommunications
is the establishment of common knowledge between two computing
entities. An immediate use of such common knowledge is in the
initiation of a secure communication session between two entities,
since they may use this common knowledge in order to
produce a secret key for use with some symmetric cipher.
The dynamic establishment of shared information (e.g. secret key)
between two entities is particularly important in networks with no
predetermined structure such as wireless mobile ad-hoc networks.
In such networks, nodes dynamically establish and terminate
communication sessions with other nodes that they may never
have encountered before, in order to exchange information
that will enable them to subsequently communicate in a secure manner.
In this paper we give and theoretically analyze a protocol that
enables two entities, each initially possessing a string, to
securely eliminate inconsistent bit positions, obtaining strings
with a higher percentage of agreement. This can help the nodes
establish a shared set of bits and use it as a key with some
shared-key encryption scheme.
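One round of block-parity reconciliation in this spirit might look as follows (a hypothetical sketch: the shared permutation, the block size, and the parity test are assumptions for illustration, and the paper's actual protocol differs). Blocks whose parities disagree provably contain an odd number of inconsistent positions and are discarded.

```python
import random

# Hypothetical one-round sketch of block-parity reconciliation (illustrative
# only).  Both parties permute positions with shared public randomness,
# split them into blocks, exchange block parities, and keep only blocks
# whose parities agree.

def reconcile_round(a, b, block=4, seed=7):
    rng = random.Random(seed)                  # shared public randomness
    idx = list(range(len(a)))
    rng.shuffle(idx)
    keep = []
    for i in range(0, len(idx), block):
        blk = idx[i:i + block]
        if sum(a[j] for j in blk) % 2 == sum(b[j] for j in blk) % 2:
            keep.extend(blk)                   # parities match: keep block
    return [a[j] for j in keep], [b[j] for j in keep]

a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1]       # differs from a in two positions
a2, b2 = reconcile_round(a, b)
# every surviving block has an even number of disagreements (often zero)
```

Note that exchanged parity bits leak information to an eavesdropper, so in a key-agreement setting the leaked bits would have to be accounted for when the final key is derived.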

Abstract: Wireless sensor networks are comprised of a vast number of ultra-small, fully autonomous computing, communication and sensing devices, with very restricted energy and computing capabilities, which co-operate to accomplish a large sensing task. Such networks can be very useful in practice in applications that require fine-grain monitoring of physical environments subject to critical conditions (such as inaccessible terrains or disaster areas). Very large numbers of sensor devices can be deployed in areas of interest and use self-organization and collaborative methods to form deeply networked environments. Features including the huge number of sensor devices involved, the severe power, computational and memory limitations, their dense deployment and frequent failures, raise new design and implementation challenges. The efficient and robust realization of such large, highly dynamic, complex, non-conventional environments is a challenging algorithmic and technological task. In this work we consider certain important aspects of the design, deployment and operation of distributed algorithms for data propagation in wireless sensor networks and discuss some characteristic protocols, along with an evaluation of their performance.

Abstract: An ad hoc mobile network is a collection of mobile hosts, with wireless communication capabilities, forming a temporary network without the aid of any established fixed infrastructure. In such networks, topological connectivity is subject to frequent, unpredictable change. Our work focuses on networks with a high rate of such changes to connectivity. For such dynamically changing networks we propose protocols which exploit the coordinated (by the protocol) motion of a small part of the network. We show that such protocols can be designed to work correctly and efficiently even in the case of arbitrary (but not malicious) movements of the hosts not affected by the protocol. We also propose a methodology for the analysis of the expected behavior of protocols for such networks, based on the assumption that mobile hosts (those whose motion is not guided by the protocol) conduct concurrent random walks in their motion space. In particular, our work examines the fundamental problem of communication and proposes distributed algorithms for it. We provide rigorous proofs of their correctness, and also give performance analyses using combinatorial tools. Finally, we have evaluated these protocols by experimental means.

Abstract: The energy balance property (i.e., all nodes having the same energy throughout the network evolution) contributes significantly (along with energy efficiency) to maximizing the network lifespan and network connectivity. The problem of achieving energy-balanced propagation in static networks is well studied and has attracted considerable research attention.
Recent technological advances have enabled sensor devices to be attached to mobile entities of our everyday life (e.g., smartphones, cars, PDAs), thus introducing the formation of highly mobile sensor networks.
Inspired by the aforementioned applications, this work is (to the best of our knowledge) the first to study the energy balance property in wireless networks where the nodes are highly and dynamically mobile. In particular, we propose a new, easily parameterized, diverse mobility model, and we present a new protocol that tries to adaptively exploit the inherent node mobility in order to achieve energy balance in the network in an efficient way.

Abstract: Much effort has been devoted in recent years to modeling public transport timetables in order to
find optimal routes. The proposed models can be classified into two types: those representing the
timetable as an array, and those representing it as a graph. The array-based models have been
shown to be very effective in terms of query time, while the graph-based models usually answer
queries by computing shortest paths, and hence they are suitable to be used in combination with
speed-up techniques developed for road networks.
In this paper, we focus on the dynamic behavior of graph-based models considering the case
where transportation systems are subject to delays with respect to the given timetable. We
make three contributions: (i) we give a simplified and optimized update routine for the well-known
time-expanded model along with an engineered query algorithm; (ii) we propose a new
graph-based model tailored for handling dynamic updates; (iii) we assess the effectiveness of
the proposed models and algorithms by an experimental study, which shows that both models
require negligible update time and a query time which is comparable to that required by some
array-based models.
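To make the time-expanded model concrete, here is a minimal sketch (the paper's optimized update routine and engineered query algorithm are omitted): every departure and arrival is an event node (station, time), waiting edges chain a station's events in time order, and each timetable connection links its departure event to its arrival event, so an earliest-arrival query reduces to a shortest-path computation with Dijkstra's algorithm.

```python
import heapq
from collections import defaultdict

# Minimal time-expanded model: event nodes, waiting edges, earliest arrival.
def earliest_arrival(connections, src, dst, start):
    graph = defaultdict(list)
    events = defaultdict(set)
    for u, dep, v, arr in connections:         # travel edges
        graph[(u, dep)].append(((v, arr), arr - dep))
        events[u].add(dep)
        events[v].add(arr)
    for s, times in events.items():            # waiting edges within a station
        ts = sorted(times)
        for t0, t1 in zip(ts, ts[1:]):
            graph[(s, t0)].append(((s, t1), t1 - t0))
    starts = [t for t in events[src] if t >= start]
    if not starts:
        return None
    s0 = (src, min(starts))                    # first usable event at src
    dist, pq = {s0: 0}, [(0, s0)]
    while pq:
        d, node = heapq.heappop(pq)
        if node[0] == dst:
            return node[1]                     # event time = arrival time
        if d > dist.get(node, float('inf')):
            continue
        for nxt, w in graph[node]:
            if d + w < dist.get(nxt, float('inf')):
                dist[nxt] = d + w
                heapq.heappush(pq, (d + w, nxt))
    return None

timetable = [('A', 9, 'B', 10), ('B', 11, 'C', 12), ('A', 8, 'C', 13)]
print(earliest_arrival(timetable, 'A', 'C', 8))    # -> 12
```

A delay in this model is handled by updating the affected event times and their incident edges, which is what the update routines discussed above engineer to be fast.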

Abstract: An enhanced impairment-aware path computation element (EPCE) for dynamic
transparent optical networks is proposed and experimentally evaluated. The obtained results show
that by using the EPCE, light-path setup times of a few seconds are achieved.

Abstract: We investigate the impact of multiple, mobile sinks on
efficient data collection in wireless sensor networks. To
improve performance, our protocol design focuses on minimizing
overlaps of sink trajectories and balancing the service load
among the sinks. To cope with high network dynamics, placement
irregularities and limited network knowledge, we propose three different
protocols: a) a centralized one that explicitly equalizes spatial coverage
(this protocol makes strong modeling assumptions, and also serves as a kind
of performance lower bound in uniform networks of low dynamics); b)
a distributed protocol based on mutual avoidance of sinks; and c) a clustering
protocol that distributively groups service areas towards balancing the load per sink.
Our simulation findings demonstrate significant gains in latency, while keeping the success
rate and the energy dissipation at very satisfactory levels even under
high network dynamics and deployment heterogeneity.

Abstract: We study the problem of fast and energy-efficient
data collection of sensory data using a mobile sink, in wireless sensor networks in which both the sensors and the sink move. Motivated by relevant applications, we focus on dynamic sensory
mobility and heterogeneous sensor placement. Our approach essentially exploits the sensor motion to adaptively propagate information based on local conditions (such as high placement concentrations), so that the sink gradually "learns"
the network and accordingly optimizes its motion. Compared to relevant solutions in the state of the art (such as the blind random walk, biased walks, and even optimized deterministic sink mobility), our method significantly reduces latency (the improvement ranges from 40% for uniform placements, to 800% for heterogeneous ones), while also improving the success rate and keeping the energy dissipation at very satisfactory levels.

Abstract: A fundamental approach in finding efficiently best routes or optimal itineraries in traffic information
systems is to reduce the search space (part of graph visited) of the most commonly used
shortest path routine (Dijkstra's algorithm) on a suitably defined graph. We investigate reduction
of the search space while simultaneously retaining data structures, created during a preprocessing
phase, of size linear (i.e., optimal) to the size of the graph. We show that the search space of
Dijkstra's algorithm can be significantly reduced by extracting geometric information from a given
layout of the graph and by encapsulating precomputed shortest-path information in the resulting geometric
objects (containers). We present an extensive experimental study comparing the impact of
different types of geometric containers using test data from real-world traffic networks. We also
present new algorithms as well as an empirical study for the dynamic case of this problem, where
edge weights are subject to change and the geometric containers have to be updated, and show that
our new methods are two to three times faster than recomputing everything from scratch. Finally,
in an appendix, we discuss the software framework that we developed to realize the implementations
of all of our variants of Dijkstra's algorithm. Such a framework is not trivial to achieve as our
goal was to maintain a common code base that is, at the same time, small, efficient, and flexible,
as we wanted to enhance and combine several variants in any possible way.
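The container idea can be sketched as follows (an assumed minimal variant with axis-parallel bounding boxes; the names `first_edges`, `build_containers`, and `pruned_dijkstra` are illustrative). Preprocessing stores, for each edge (u, v), a bounding box of all targets whose shortest path out of u starts with that edge; a query runs Dijkstra's algorithm but skips any edge whose box does not contain the target, which shrinks the search space without losing correctness.

```python
import heapq, math

def first_edges(adj, s):
    # for each reachable t, the first edge of a shortest s-t path
    dist, first = {s: 0.0}, {}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                first[v] = (s, v) if u == s else first[u]
                heapq.heappush(pq, (nd, v))
    return first

def build_containers(adj, pos):
    boxes = {}
    for s in adj:
        for t, e in first_edges(adj, s).items():
            x, y = pos[t]
            x0, y0, x1, y1 = boxes.get(e, (x, y, x, y))
            boxes[e] = (min(x0, x), min(y0, y), max(x1, x), max(y1, y))
    return boxes

def pruned_dijkstra(adj, pos, boxes, s, t):
    tx, ty = pos[t]
    dist, pq = {s: 0.0}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            box = boxes.get((u, v))
            if box and not (box[0] <= tx <= box[2] and box[1] <= ty <= box[3]):
                continue                      # container excludes the target
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

adj = {'A': [('B', 1), ('D', 1)], 'B': [('A', 1), ('C', 1)],
       'C': [('B', 1), ('D', 1)], 'D': [('A', 1), ('C', 1)]}
pos = {'A': (0, 0), 'B': (1, 0), 'C': (1, 1), 'D': (0, 1)}
boxes = build_containers(adj, pos)
print(pruned_dijkstra(adj, pos, boxes, 'A', 'C'))   # -> 2.0
```

The boxes require only constant space per edge, which is the linear-size preprocessing the abstract refers to; when an edge weight changes, the affected containers must be enlarged or recomputed, which is the dynamic case studied above.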

Abstract: Information retrieval (IR) in peer-to-peer (P2P) networks,
where the corpus is spread across many loosely coupled
peers, has recently gained importance. In contrast to IR
systems on a centralized server or server farm, P2P IR faces
the additional challenge of either being oblivious to global
corpus statistics or having to compute the global measures
from local statistics at the individual peers in an efficient,
distributed manner. One specific measure of interest is the
global document frequency for different terms, which would
be very beneficial as term-specific weights in the scoring and
ranking of merged search results that have been obtained
from different peers.
This paper presents an efficient solution for the problem
of estimating global document frequencies in a large-scale
P2P network with very high dynamics where peers can join
and leave the network on short notice. In particular, the
developed method takes into account the fact that the lo-
cal document collections of autonomous peers may arbitrar-
ily overlap, so that global counting needs to be duplicate-
insensitive. The method is based on hash sketches as a
technique for compact data synopses. Experimental stud-
ies demonstrate the estimator's accuracy, scalability, and
ability to cope with high dynamics. Moreover, the benefit
for ranking P2P search results is shown by experiments with
real-world Web data and queries.
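A hash sketch in the Flajolet-Martin style (assumed here as the underlying synopsis; the paper's estimator adds distribution and engineering details) makes the duplicate-insensitivity concrete: each document id is hashed, the position of the hash's lowest set bit marks one bit of a bitmap, and the sketch of a union of collections is simply the bitwise OR of their bitmaps, so counting the same document twice changes nothing.

```python
import hashlib

PHI = 0.77351                                   # Flajolet-Martin correction

def rho(x):                                     # index of the lowest set bit
    return (x & -x).bit_length() - 1

def sketch(ids, bits=32):
    bitmap = 0
    for d in ids:
        h = int.from_bytes(hashlib.sha256(str(d).encode()).digest()[:8], 'big')
        bitmap |= 1 << rho(h | (1 << bits))     # guard against h == 0
    return bitmap

def estimate(bitmap):
    r = 0
    while bitmap & (1 << r):                    # first zero bit position
        r += 1
    return int(2 ** r / PHI)                    # rough cardinality estimate

docs = [f'doc{i}' for i in range(1000)]
union = sketch(docs) | sketch(docs[:500])       # overlapping peer collections
print(union == sketch(docs))                    # -> True: duplicates are free
```

A single bitmap has high variance, so practical estimators average many independent bitmaps; the duplicate-insensitive OR-merge is exactly what allows arbitrarily overlapping local collections of autonomous peers to be combined.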

Abstract: In this work we study the implementation of multicost rout-
ing in a distributed way in wireless mobile ad hoc networks.
In contrast to traditional single-cost routing, where each
path is characterized by a scalar, in multicost routing a
vector of cost parameters is assigned to each network link,
from which the cost vectors of candidate paths are calcu-
lated. These parameters are combined in various optimiza-
tion functions, corresponding to different routing algorithms,
for selecting the optimal path. Up until now the performance
of multicost and multi-constrained routing in wireless ad hoc
networks has been evaluated either at a theoretical level or
by assuming that nodes are static and have full knowledge
of the network topology and nodes' state. In the present
paper we assess the performance of multicost routing based
on energy-related parameters in mobile ad hoc networks by
embedding its logic in the Dynamic Source Routing (DSR)
algorithm, which is a well-known fully distributed routing
algorithm. We use simulations to compare the performance
of the multicost-DSR algorithm to that of the original DSR
algorithm and examine their behavior under various node
mobility scenarios. The results confirm that the multicost-DSR algorithm improves the performance of the network in comparison to the original DSR algorithm in terms of energy efficiency. The multicost-DSR algorithm enhances the performance of the network not only by reducing overall energy consumption in the network, but also by spreading energy consumption more uniformly across the network, prolonging the network lifetime and reducing the packet drop probability. Furthermore, the delay suffered by packets reaching their destination is shown to be lower for the multicost-DSR algorithm than for the original DSR algorithm.
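
The cost-vector idea described above can be sketched in a few lines (the topology, cost parameters, and objective functions below are illustrative, not the paper's exact ones):

```python
# Each link carries an energy cost; each node a residual energy.
# Path cost vectors combine link parameters with additive and min
# operators; only at the end does an objective function rank them.

def path_cost_vector(path, link_energy, node_energy):
    hops = len(path) - 1                                               # additive
    energy = sum(link_energy[(u, v)] for u, v in zip(path, path[1:]))  # additive
    residual = min(node_energy[u] for u in path)                       # min operator
    return hops, energy, residual

def select_path(paths, link_energy, node_energy, objective):
    """Different objectives over the same cost vectors correspond to
    different routing algorithms (e.g. min-hop vs. energy-aware)."""
    return min(paths,
               key=lambda p: objective(*path_cost_vector(p, link_energy, node_energy)))
```

A min-hop objective such as `lambda h, e, r: h` and an energy-aware one such as `lambda h, e, r: h / r` can select different paths over the same cost vectors, which is exactly the flexibility the multicost formulation provides.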

Abstract: In this work, we study the impact of dynamically changing network link capacities on the stability properties of packet-switched networks. In particular, we consider the Adversarial Quasi-Static Queueing Theory model, where each link capacity may take on only two possible (integer) values, namely 1 and C > 1, under a (w, ρ)-adversary. We obtain the following results:
• Allowing such dynamic changes to the link capacities of a network with just ten nodes that uses the LIS (Longest-in-System) protocol for contention-resolution results in instability at certain injection rates and for large enough values of C.
• The combination of dynamically changing link capacities with compositions of contention-resolution protocols on network queues suffices for similar instability bounds: the composition of LIS with any of SIS (Shortest-in-System), NTS (Nearest-to-Source), and FTG (Furthest-to-Go) is unstable at certain injection rates for large enough values of C.
• The instability bound of the network subgraphs that are forbidden for stability is affected by the dynamic changes to the link capacities: we present improved instability bounds for all the directed subgraphs that were known to be forbidden for stability on networks running a certain greedy protocol.

Abstract: In this work we study the combination of multicost
routing and adjustable transmission power in wireless
ad hoc networks, so as to obtain dynamic energy- and
interference-efficient routes to optimize network performance.
In multi-cost routing, a vector of cost parameters is
assigned to each network link, from which the cost vectors
of candidate paths are calculated. Only at the end are these
parameters combined in various optimization functions,
corresponding to different routing algorithms, for selecting
the optimal path. The multi-cost routing problem is a
generalization of the multi-constrained problem, where no
constraints exist, and is also significantly more powerful
than single-cost routing. Since energy is an important
limitation of wireless communications, the cost parameters
considered are the number of hops, the interference caused,
the residual energy and the transmission power of the
nodes on the path; other parameters could also be included,
as desired. We assume that nodes can use power control to
adjust their transmission power to the desired level. The
experiments conducted show that the combination of multicost
routing and adjustable transmission power can lead to
reduced interference and energy consumption, improving
network performance and lifetime.

Abstract: In this book chapter we will consider key establishment protocols for wireless sensor networks.
Several protocols have been proposed in the literature for the establishment of a shared group key for wired networks.
The choice of a protocol depends on whether the key is established by one of the participants (and then transported to the other(s)) or agreed upon among the participants, and on the underlying cryptographic mechanisms (symmetric or asymmetric). Clearly, the design of key establishment protocols for sensor networks must deal with problems and challenges that do not exist in wired networks. To name a few: wireless links are particularly vulnerable to eavesdropping; sensor devices can be captured (and the secrets they contain compromised); and in many upcoming wireless sensor networks, nodes cannot rely on the presence of an online trusted server (whereas most standardized authentication and key establishment protocols do rely on such a server).
In particular, we will consider five distributed group key establishment protocols. Each of these protocols applies a different algorithmic technique that makes it more suitable for (i) static sensor networks, (ii) sensor networks where nodes enter sleep mode (i.e. dynamic, with a low rate of updates on the connectivity graph), and (iii) fully dynamic networks where nodes may even be mobile. On the other hand, the common factor for all five protocols is that they can be applied in dynamic groups (where members can be excluded or added) and provide forward and backward secrecy. All these protocols are based on the Diffie-Hellman key exchange algorithm and constitute natural extensions of it to the multiparty case.
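
As a toy illustration of how Diffie-Hellman extends naturally to the multiparty case, a shared secret of the form g^(x1·x2·…·xk) can be built by a serial exponentiation chain (a sketch only: the group parameters below are toy values, and a real protocol must also handle partial-value distribution, authentication, and member addition/exclusion):

```python
# Toy DH group for illustration only; a real deployment uses a
# standardized large safe-prime group or elliptic curves.
P = 2**89 - 1   # a Mersenne prime, usable as a toy modulus
G = 3

def chain_key(secrets):
    """Serial group DH: each member in turn raises the running value
    to its secret exponent, yielding G^(x1*x2*...*xk) mod P."""
    value = G
    for x in secrets:
        value = pow(value, x, P)
    return value
```

Since exponents multiply, the result is independent of the order in which members contribute; each member derives the group key by applying its own secret to the partial value built from everyone else's contributions.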

Abstract: The promises inherent in users coming together to form data
sharing network communities bring to the foreground new problems formulated
over such dynamic, ever growing, computing, storage, and networking
infrastructures. A key open challenge is to harness these highly
distributed resources toward the development of an ultra scalable, efficient
search engine. From a technical viewpoint, any acceptable solution
must fully exploit all available resources dictating the removal of any
centralized points of control, which can also readily lead to performance
bottlenecks and reliability/availability problems. Equally importantly,
however, a highly distributed solution can also facilitate pluralism in informing
users about internet content, which is crucial in order to preclude
the formation of information-resource monopolies and the biased visibility
of content from economically-powerful sources. To meet these challenges,
the work described here puts forward MINERVA∞, a novel search
engine architecture, designed for scalability and efficiency. MINERVA∞
encompasses a suite of novel algorithms, including algorithms for creating
data networks of interest, placing data on network nodes, load balancing,
top-k algorithms for retrieving data at query time, and replication algorithms
for expediting top-k query processing. We have implemented the
proposed architecture and we report on our extensive experiments with
real-world, web-crawled, and synthetic data and queries, showcasing the
scalability and efficiency traits of MINERVA∞.

Abstract: Recently, interest has arisen in network worms that propagate using Domain Name Servers (DNS) in order to discover victim hosts. These worms generate random strings as possible network domain names, and then query Domain Name Servers in order to discover the corresponding IP addresses. In this paper we present models for the dynamics of the co-evolution of worm agents in the presence of anti-worm agents that move in the network in order to stop worm propagation. The proposed models consider anti-worm agents that know the network and anti-worm agents that do not know it and need to issue queries in order to discover valid IP addresses. We further introduce "honeypot" domain name servers that attempt to lure worms, introducing only a delay and providing no answer. We show that simply delaying the response to DNS queries issued by the worm has little positive effect on the worm's propagation.

Abstract: Recent rapid developments in micro-electro-mechanical systems
(MEMS), wireless communications and digital electronics have already
led to the development of tiny, low-power, low-cost sensor devices.
Such devices integrate sensing, limited data processing and restricted
communication capabilities.
Each sensor device individually might have small utility; however, the
effective distributed co-ordination of large numbers of such devices can
lead to the efficient accomplishment of large sensing tasks. Large numbers
of sensors can be deployed in areas of interest (such as inaccessible
terrains or disaster places) and use self-organization and collaborative
methods to form an ad-hoc network.
We note however that the efficient and robust realization of such large,
highly-dynamic, complex, non-conventional networking environments is
a challenging technological and algorithmic task, because of the unique
characteristics and severe limitations of these devices.
This talk will present and discuss several important aspects of the
design, deployment and operation of sensor networks. In particular, we
provide a brief description of the technical specifications of state-of-the-art
sensors, a discussion of possible models used to abstract such networks,
a discussion of some key algorithmic design techniques (like randomization,
adaptation and hybrid schemes), a presentation of representative
protocols for sensor networks, for important problems including data
propagation, collision avoidance and energy balance, and an evaluation
of crucial performance properties (correctness, efficiency, fault-tolerance)
of these protocols, both with analytic and simulation means.

Abstract: We propose a class of novel energy-efficient multi-cost routing algorithms for wireless mesh networks, and evaluate their performance. In multi-cost routing, a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated using appropriate operators. In the end these parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. We evaluate the performance of the proposed energy-aware multi-cost routing algorithms under two models. In the network evacuation model, the network starts with a number of packets that have to be transmitted and an amount of energy per node, and the objective is to serve the packets in the smallest number of steps, or serve as many packets as possible before the energy is depleted. In the dynamic one-to-one communication model, new data packets are generated continuously and nodes are capable of recharging their energy periodically, over an infinite time horizon, and we are interested in the maximum achievable steady-state throughput, the packet delay, and the energy consumption. Our results show that energy-aware multi-cost routing increases the lifetime of the network and achieves better overall network performance than other approaches.

Abstract: In this work we study the combination of
multicost routing and adjustable transmission power
in wireless ad-hoc networks, so as to obtain dynamic
energy and interference-efficient routes to optimize network performance. In multi-cost routing, a vector of
cost parameters is assigned to each network link, from
which the cost vectors of candidate paths are calcu-
lated. Only at the end are these parameters combined in
various optimization functions, corresponding to different routing algorithms, for selecting the optimal path.
The multi-cost routing problem is a generalization of
the multi-constrained problem, where no constraints exist, and is also significantly more powerful than single-
cost routing. Since energy is an important limitation of
wireless communications, the cost parameters considered
are the number of hops, the interference caused,
the residual energy and the transmission power of the
nodes on the path; other parameters could also be included,
as desired. We assume that nodes can use power
control to adjust their transmission power to the desired
level. The experiments conducted show that the combination
of multi-cost routing and adjustable transmission power can lead to reduced interference and energy
consumption, improving network performance and life-
time.

Abstract: In this work we study the dynamic one-to-one communica-
tion problem in energy- and capacity-constrained wireless ad-hoc net-
works. The performance of such networks is evaluated under random
traffic generation and continuous energy recharging at the nodes over an
infinite time horizon. We are interested in the maximum throughput that
can be sustained by the network with the node queues being finite and in
the average packet delay for a given throughput. We propose a multicost
energy-aware routing algorithm and compare its performance to that of
minimum-hop routing. The results of our experiments show that gener-
ally the energy-aware algorithm achieves a higher maximum throughput
than the minimum-hop algorithm. More specifically, when the network
is mainly energy-constrained and for the 2-dimensional topology consid-
ered, the throughput of the proposed energy-aware routing algorithm is
found to be almost twice that of the minimum-hop algorithm.

Abstract: We present and discuss challenges and solutions posed by the design of an
adaptable network infrastructure of tiny artifacts. Such artifacts are characterized by
severe limitations in computational power, communications capacity and energy;
nevertheless they must realize a communication infrastructure able to deliver
services to the end-users in a very dynamic and challenging environment. Namely,
we present one unifying scenario for the activities of the FRONTS project
(www.fronts.cti.gr). The aim of the unifying scenario is to show how the results
achieved in the project can be exploited to build such a communication
infrastructure.

Abstract: In this work, we study the fundamental naming and counting problems (and some variations) in networks that are anonymous, unknown, and possibly dynamic. In counting, nodes must determine the size of the network n and in naming they must end up with unique identities. By anonymous we mean that all nodes begin from identical states
apart possibly from a unique leader node and by unknown that nodes
have no a priori knowledge of the network (apart from some minimal
knowledge when necessary), including ignorance of n. Network dynamicity is modeled by the 1-interval connectivity model [KLO10], in which communication is synchronous and a (worst-case) adversary chooses the edges of every round subject to the condition that each instance is connected. We first focus on static networks with broadcast, where we prove that, without a leader, counting is impossible to solve and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. We also show that a unique leader suffices in order to solve counting in linear time.
Then we focus on dynamic networks with broadcast. We conjecture that
dynamicity renders nontrivial computation impossible. In view of this,
we let the nodes know an upper bound on the maximum degree that will
ever appear and show that in this case the nodes can obtain an upper
bound on n. Finally, we replace broadcast with one-to-each, in which a
node may send a different message to each of its neighbors. Interestingly,
this natural variation is proved to be computationally equivalent to a
full-knowledge model, in which unique names exist and the size of the
network is known.

Abstract: Evolutionary dynamics have been traditionally studied in the context of homogeneous populations, mainly described by the Moran process [15]. Recently, this approach has been generalized in [13] by arranging individuals on the nodes of a network (in general, directed). In this setting, the existence of directed arcs enables the simulation of extreme phenomena, where the fixation probability of a randomly placed mutant (i.e. the probability that the offspring of the mutant eventually spread over the whole population) is arbitrarily small or large. On the other hand, undirected networks (i.e. undirected graphs) seem to have a smoother behavior, and thus it is more challenging to find suppressors/amplifiers of selection, that is, graphs with smaller/greater fixation probability than the complete graph (i.e. the homogeneous population). In this paper we focus on undirected graphs. We present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation
probability of general undirected graphs. As our main contribution, we introduce a natural alternative to the model proposed in [13]. In our new evolutionary model, all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. That is, the behavior of the individuals in our new model and in the model of [13] can be interpreted as an “aggregation” vs. an “all-or-nothing” strategy, respectively. We prove that our new model of mutual influences admits a potential function, which guarantees the convergence of the system for any graph topology and any initial fitness vector of the individuals. Furthermore, we prove fast convergence to the stable state for the case of the complete graph, and we provide almost tight bounds on the limit fitness of the individuals. Apart from being important on its own, this new evolutionary model appears to be useful also in the abstract modeling of control mechanisms over invading populations in networks. We demonstrate this by introducing and analyzing two alternative control approaches, for which we bound the time needed to stabilize to the “healthy” state of the system.
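
The fixation probability in the network model of [13] can be estimated by direct simulation of the birth-death dynamics (a sketch; the graph, fitness value, and trial count below are illustrative). On the complete graph the estimate should approach the classical Moran value (1 - 1/r) / (1 - r^(-n)):

```python
import random

def moran_fixation_prob(neighbors, r, trials, seed=1):
    """Estimate the fixation probability of a single mutant of fitness r
    placed uniformly at random: at each step a reproducer is chosen with
    probability proportional to fitness and its offspring replaces a
    uniformly random neighbor, until mutants fixate or go extinct."""
    rng = random.Random(seed)
    nodes = list(neighbors)
    n = len(nodes)
    fixations = 0
    for _ in range(trials):
        mutants = {rng.choice(nodes)}
        while 0 < len(mutants) < n:
            weights = [r if u in mutants else 1.0 for u in nodes]
            u = rng.choices(nodes, weights=weights)[0]
            v = rng.choice(neighbors[u])
            if u in mutants:
                mutants.add(v)
            else:
                mutants.discard(v)
        fixations += (len(mutants) == n)
    return fixations / trials
```

Running the same estimator on candidate suppressor/amplifier graphs and comparing against the complete-graph value is the basic experimental counterpart of the analytic bounds discussed above.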

Abstract: The concept of trust plays an important role in the operation and public acceptance of today's computing environment. Although it is a difficult concept to formalize and handle, many efforts have been made towards a clear definition of trust and the development of systematic ways for trust management. Our central viewpoint is that trust can no longer be defined as a static set of rules defining system properties that hold eternally, due to the highly dynamic nature of today's computing systems (e.g. wireless networks, ad-hoc networks, virtual communities, digital territories, etc.). Our approach is an effort to define trust in terms of properties that hold with some limiting probability as the system grows, and to establish conditions which ensure that "good" properties hold almost certainly. Based on this viewpoint, in this paper we provide a new framework for defining trust through formally definable properties that hold, almost certainly, in the limit in randomly growing combinatorial structures that model "boundless" computing systems (e.g. ad-hoc networks), drawing on results that establish the threshold behavior of predicates written in first and second order logic. We will also see that, interestingly, some trust models have properties that do not have limiting probabilities. This fact can be used to demonstrate that as certain trust networks grow indefinitely, their trust properties are not certain to be present.

Abstract: An ad-hoc mobile network is a collection of mobile hosts, with
wireless communication capabilities, forming a temporary network
without the aid of any established fixed infrastructure.
In such networks, topological connectivity is subject to frequent,
unpredictable change. Our work focuses on networks with a high
rate of such connectivity changes. For such dynamically changing
networks we propose protocols which exploit the co-ordinated
(by the protocol) motion of a small part of the network.
We show that such protocols can be designed to work
correctly and efficiently even in the case of arbitrary (but not
malicious) movements of the hosts not affected by the protocol.
We also propose a methodology for the analysis of the expected
behaviour of protocols for such networks, based on the assumption that mobile hosts (whose motion is not guided by
the protocol) conduct concurrent random walks in their
motion space.
Our work examines some fundamental problems such as pair-wise
communication, election of a leader and counting, and proposes
distributed algorithms for each of them. We provide their
proofs of correctness, and also give rigorous analysis via
combinatorial tools as well as experiments.

Abstract: This article studies the transmission control
protocol (TCP) synchronization effect in optical burst
switched networks. Synchronization of TCP flows appears
when optical bursts carrying segments from different flows
are dropped in the network, causing the congestion windows of those flows to decrease simultaneously. In this article, this
effect is studied with different assembly schemes and network scenarios. Different metrics are applied to quantitatively assess synchronization with classical assembly
schemes. A new burst assembly scheme is proposed that
statically or dynamically allocates flows to multiple assembly queues to control flow aggregation within the assembly
cycle. The effectiveness of the scheme has been evaluated,
showing a good improvement in optical link utilization.

Abstract: In this work, we study the impact of dynamically changing
link capacities on the delay bounds of LIS (Longest-In-
System) and SIS (Shortest-In-System) protocols on specific
networks (that can be modelled as Directed Acyclic Graphs-
DAGs) and stability bounds of greedy contention-resolution
protocols running on arbitrary networks under the Adversarial
Queueing Theory. In particular, we consider the model
of dynamic capacities, where each link capacity may take
on integer values from [1, C] with C > 1, under a (w, ρ)-adversary.

Abstract: In this work, we study the impact of dynamically changing link capacities on the delay bounds of the LIS (Longest-In-System) and SIS (Shortest-In-System) protocols on specific networks (those that can be modelled as Directed Acyclic Graphs (DAGs)) and on the stability bounds of greedy contention-resolution protocols running on arbitrary networks under the Adversarial Queueing Theory. In particular, we consider the model of dynamic capacities, where each link capacity may take on integer values from [1, C] with C > 1, under a (w, ρ)-adversary. We show that the packet delay on DAGs for LIS is upper bounded by O(i·w·ρ·C) and lower bounded by Ω(i·w·ρ·C), where i is the level of a node v in the DAG (the length of the longest path leading to v when nodes are ordered by the topological order induced by the graph). In a similar way, we show that the performance of SIS on DAGs is lower bounded by Ω(i·w·ρ·C), while the existence of a polynomial upper bound on packet delay on DAGs when SIS is used for contention-resolution remains an open problem. We prove that every queueing network running a greedy contention-resolution protocol is stable for a rate not exceeding a particular stability threshold, depending on C and the length of the longest path in the network.

Abstract: We propose, implement and evaluate new energy conservation schemes for efficient data propagation in wireless sensor networks. Our protocols are adaptive, i.e. locally monitor the network conditions and accordingly adjust towards optimal operation choices. This dynamic feature is particularly beneficial in heterogeneous settings and in cases of redeployment of sensor devices in the network area. We implement our protocols and evaluate their performance through a detailed simulation study using our extended version of ns-2. In particular we combine our schemes with known communication paradigms. The simulation findings demonstrate significant gains and good trade-offs in terms of delivery success, delay and energy dissipation.

Abstract: In this paper we study the problem of assigning transmission ranges to the nodes of a multihop
packet radio network so as to minimize the total power consumed under the constraint
that adequate power is provided to the nodes to ensure that the network is strongly connected
(i.e., each node can communicate along some path in the network to every other node). Such
assignment of transmission ranges is called complete. We also consider the problem of achieving
strongly connected bounded diameter networks.
For the case of n + 1 colinear points at unit distance apart (the unit chain) we give a tight
asymptotic bound for the minimum cost of a range assignment of diameter h when h is a fixed
constant and when h > (1 + ε) log n, for some constant ε > 0. When the distances between the
colinear points are arbitrary, we give an O(n^4) time dynamic programming algorithm for finding
a minimum cost complete range assignment.
For points in three dimensions we show that the problem of deciding whether a complete
range assignment of a given cost exists, is NP-hard. For the same problem we give an O(n^2)
time approximation algorithm which provides a complete range assignment with cost within a
factor of two of the minimum. The complexity of this problem in two dimensions remains open,
while the approximation algorithm works in this case as well.
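
The notion of a "complete" range assignment can be made concrete with a small checker (an illustrative sketch; the cost uses the common power model r^α with α = 2):

```python
def _reachable(adj, start):
    """Nodes reachable from start by depth-first search."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_complete(points, ranges):
    """An assignment is complete if the directed graph with an edge
    u -> v whenever |p_u - p_v| <= r_u is strongly connected."""
    n = len(points)
    adj = {u: [v for v in range(n) if v != u and abs(points[u] - points[v]) <= ranges[u]]
           for u in range(n)}
    radj = {u: [v for v in range(n) if u in adj[v]] for u in range(n)}
    return len(_reachable(adj, 0)) == n and len(_reachable(radj, 0)) == n

def cost(ranges, alpha=2):
    """Total power under the usual r^alpha model."""
    return sum(r ** alpha for r in ranges)
```

On the unit chain 0, 1, 2, 3, assigning every node range 1 is complete at cost 4, while setting the last node's range to 0 leaves the induced graph not strongly connected.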

Abstract: Future Grid Networks should be able to provide Quality of Service (QoS) guarantees
to their users. In this work we examine the way Grid resources should be
configured so as to provide deterministic delay guarantees to Guaranteed Service
(GS) users and fairness to Best Effort (BE) users. The resources are partitioned
in groups that serve GS users only, or BE users only, or both types of users with
different priorities. Furthermore, the GS users are registered to the resources
either statically or dynamically, while both single- and multi-CPU resources are
examined. Finally, the proposed resource configurations for providing QoS are
implemented in the GridSim environment and a number of simulations are executed.
Our results indicate that the allocation of resources to both types of users, with
different priorities, results in fewer missed deadlines and better resource utilization.
Finally, benefits can be derived from the dynamic registration of GS users
to the resources.

Abstract: In this paper we present an efficient general simulation strategy for
computations designed for fully operational BSP machines of n ideal processors,
on n-processor dynamic-fault-prone BSP machines. The fault occurrences are fail-stop
and fully dynamic, i.e., they are allowed to happen on-line at any point of the
computation, subject to the constraint that the total number of faulty processors
may never exceed a known fraction. The computational paradigm can be exploited
for robust computations over virtual parallel settings with a volatile underlying
infrastructure, such as a NETWORK OF WORKSTATIONS (where workstations may be
taken out of the virtual parallel machine by their owner).
Our simulation strategy is Las Vegas (i.e., it may never fail, due to backtracking
operations to robustly stored instances of the computation, in case of locally
unrecoverable situations). It adopts an adaptive balancing scheme of the workload
among the currently live processors of the BSP machine.
Our strategy is efficient in the sense that, compared with an optimal off-line
adversarial computation under the same sequence of fault occurrences, it achieves an
O((log n · log log n)^2) multiplicative factor times the optimal work (namely, this
measure is in the sense of the “competitive ratio” of on-line analysis). In addition,
our scheme is modular, integrated, and considers many implementation points.
We comment that, to our knowledge, no previous work on robust parallel computations
has considered fully dynamic faults in the BSP model, or in general distributed
memory systems. Furthermore, this is the first time an efficient Las Vegas
simulation in this area is achieved.

Abstract: We consider information aggregation as a method for reducing the information exchanged in a Grid network and used by the resource manager in order to make scheduling decisions. In this way, information is summarized across nodes and sensitive or detailed information can be kept private, while resources are still publicly available for use. We present a general framework for information aggregation, trying to identify issues that relate to aggregation in Grids. In this context, we describe a number of techniques, including single point and intra-domain aggregation, define appropriate grid-specific domination relations and operators for aggregating static and dynamic resource information, and discuss resource selection optimization functions. The quality of an aggregation scheme is measured both by its effects on the efficiency of the scheduler's decisions and also by the reduction it brings on the amount of resource information recorded, a tradeoff that we examine in detail. Simulation experiments demonstrate that the proposed schemes achieve significant information reduction, either in the amount of information exchanged, or in the frequency of the updates, while at the same time maintaining most of the value of the original information as expressed by a stretch factor metric we introduce.
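
A single intra-domain aggregation step of the kind described can be sketched as follows (the field names and choice of operators are hypothetical illustrations; the paper defines its own grid-specific domination relations and operators):

```python
# Summarize per-resource records of one domain into a single advertisement.
# Pessimistic operators (min/max) are used for values a scheduler must not
# over-estimate; an additive operator for capacities that pool together.
def aggregate_domain(resources):
    return {
        "cpu_speed": min(r["cpu_speed"] for r in resources),  # static, pessimistic
        "free_cpus": sum(r["free_cpus"] for r in resources),  # dynamic, additive
        "queue_len": max(r["queue_len"] for r in resources),  # dynamic, worst case
    }
```

The scheduler then sees one record per domain instead of one per resource, trading some decision accuracy for a large reduction in the information exchanged.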

Abstract: In this PhD thesis, we try to use formal logic and threshold phenomena that asymptotically emerge with certainty in order to build new trust models and to evaluate existing ones. The departure point of our work is that dynamic, global computing systems are not amenable to a static viewpoint of the trust concept, no matter how this concept is formalized. We believe that trust should be a statistical, asymptotic concept to be studied in the limit as the system's components grow according to some growth rate. Thus, our main goal is to define trust as an emerging system property that "appears" or "disappears" when a set of properties holds, asymptotically with probability $0$ or $1$ respectively. Here we try to combine first and second order logic in order to analyze the trust measures of specific network models. Moreover, we can use formal logic in order to determine whether generic reliability trust models provide a method for deriving trust between peers/entities as the network's components grow. Our approach can be used in a wide range of applications, such as monitoring the behavior of peers, providing a measure of trust between them, and assessing the level of reliability of peers in a network. Wireless sensor networks are comprised of a vast number of ultra-small autonomous computing, communication and sensing devices, with restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Sensor networks can be very useful in practice. Such systems should at least guarantee the confidentiality and integrity of the information reported to the controlling authorities regarding the realization of environmental events. Therefore, key establishment is critical for the protection of wireless sensor networks and the prevention of adversaries from attacking the network. Finally, in this dissertation we also propose three distributed group key establishment protocols suitable for such energy-constrained networks.
This dissertation is composed of two parts. Part I develops the theory of the first and second order logic of graphs: their definition and the analysis of the graph properties expressible in the {\em first order language} of graphs. In Part II we introduce new distributed group key establishment protocols suitable for sensor networks. Several key establishment schemes are derived and their performance is demonstrated.

Abstract: Geographic routing is becoming the protocol of choice for
many sensor network applications. The current state of the art is unsatisfactory:
some algorithms are very efficient; however, they require a
preliminary planarization of the communication graph. Planarization induces
overhead and is thus not realistic for some scenarios such as the
case of highly dynamic network topologies. On the other hand, georouting
algorithms which do not rely on planarization have fairly low success
rates and fail to route messages around all but the simplest obstacles.
To overcome these limitations, we propose the GRIC geographic routing
algorithm. It has absolutely no topology maintenance overhead, almost
100% delivery rates (when no obstacles are added), bypasses large convex
obstacles, finds short paths to the destination, resists link failure
and is fairly simple to implement. The case of hard concave obstacles
is also studied; such obstacles are hard instances for which performance
diminishes.
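Pure greedy forwarding, the baseline that georouting algorithms such as GRIC improve on, can be sketched in a few lines. This is a hedged illustration of the greedy step only, not the GRIC algorithm itself (which adds mechanisms for bypassing obstacles); the data-structure names are assumptions.

```python
import math

def greedy_next_hop(pos, cur, dest, neighbors):
    """Forward to the neighbor closest to dest; None at a local minimum."""
    d = lambda a, b: math.dist(pos[a], pos[b])
    best = min(neighbors[cur], key=lambda n: d(n, dest), default=None)
    if best is None or d(best, dest) >= d(cur, dest):
        return None  # stuck: a void/obstacle blocks pure greedy routing
    return best

pos = {'a': (0, 0), 'b': (1, 0), 'c': (2, 0)}
nbrs = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
hop = greedy_next_hop(pos, 'a', 'c', nbrs)
```

The `None` case is exactly where plain greedy routing fails and planarization-based recovery (or GRIC's approach) is needed.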

Abstract: In the near future, it is reasonable to expect that new types of systems will appear, of massive scale, that will operate in a constantly changing networked environment. We expect that most such systems will have the form of a large society of tiny networked artefacts. Angluin et al. introduced the notion of ``Probabilistic Population Protocols'' (PPP) in order to model the behavior of such systems, where extremely limited agents are represented as finite state machines that interact in pairs under the control of an adversary scheduler. We propose to study the dynamics of Probabilistic Population Protocols via the differential equations approach. We provide a very general model that allows us to examine the continuous dynamics of population protocols, and we show that it includes the model of Angluin et al., under certain conditions, with respect to the continuous dynamics of the two models. Our main proposal here is to exploit the powerful tools of continuous nonlinear dynamics in order to examine the behavior of such systems. We also provide a sufficient condition for stability.
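The differential-equations view can be illustrated on the simplest population protocol, the one-way epidemic: an "infected" agent converts a "susceptible" one on interaction, so in the large-population limit the infected fraction $x$ follows $dx/dt = x(1-x)$, with $x=1$ a stable fixed point. The protocol choice, step size, and horizon below are illustrative assumptions, not the paper's general model.

```python
# Euler integration of the mean-field ODE dx/dt = x(1 - x) for the
# one-way epidemic protocol, starting from a small infected fraction.

def simulate_epidemic(x0=0.01, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * x * (1 - x)   # Euler step of the mean-field ODE
    return x

x_final = simulate_epidemic()
```

The trajectory converges toward the stable equilibrium $x = 1$, matching the qualitative behavior one expects from the discrete protocol on large populations.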

Abstract: In this work, we study the impact of dynamically changing network link capacities on the stability properties of packet-switched networks. Specifically, we consider the Adversarial, Quasi-Static Queuing Theory model, where each link capacity may take on only two possible (integer) values, namely 1 and C > 1, under a $(w, \rho)$-adversary. We obtain the following results:
• Allowing such dynamic changes to the link capacities of a network with just ten nodes that uses the LIS (Longest-in-System) protocol for contention resolution results in instability at certain injection rates for large enough values of C.
• The combination of dynamically changing link capacities with compositions of contention-resolution protocols on network queues suffices for similar instability bounds: the composition of LIS with any of the SIS (Shortest-in-System), NTS (Nearest-to-Source), and FTG (Furthest-to-Go) protocols is unstable at certain injection rates for large enough values of C.
• The instability bound of the network subgraphs that are forbidden for stability is affected by the dynamic changes to the link capacities: we present improved instability bounds for all the directed subgraphs that were known to be forbidden for stability on networks running a certain greedy protocol.

Abstract: We study here dynamic antagonism in a fixed network, represented as a graph $G$ of $n$ vertices. In particular, we consider the case of $k \leq n$ particles walking randomly independently around the network. Each particle belongs to exactly one of two antagonistic species, none of which can give birth to children. When two particles meet, they are engaged in a (sometimes mortal) local fight. The outcome of the fight depends on the species to which the particles belong. Our problem is \emph{to predict} (i.e. to compute) the eventual chances of species survival. We prove here that this can indeed be done in \emph{expected polynomial time in the size of the network}, provided that the network is \emph{undirected}.
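The quantity being computed can be illustrated by brute-force Monte Carlo, which is emphatically not the paper's polynomial-time prediction algorithm: two antagonistic particles walk on an undirected graph, and when they meet the fight kills one of them. The graph, the uniform fight rule, and the always-mortal assumption below are illustrative choices.

```python
import random

def survival_estimate(adj, start_a, start_b, trials=2000, seed=0):
    """Estimate species A's survival probability by simulation."""
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(trials):
        a, b = start_a, start_b
        while a != b:                  # both particles step each round
            a = rng.choice(adj[a])
            b = rng.choice(adj[b])
        a_wins += rng.random() < 0.5   # uniform fight outcome on meeting
    return a_wins / trials

cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
p_a = survival_estimate(cycle, 0, 2)
```

By the symmetry of the fight rule here, `p_a` concentrates around 1/2; the paper's contribution is computing such survival chances exactly, in expected polynomial time, rather than estimating them.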

Abstract: Peer-to-peer sharing systems are becoming
increasingly popular and an exciting new class of
innovative, internet-based data management
systems. In these systems, users contribute their
own resources (processing units and storage
devices) and content (i.e., documents) to the P2P
community. We focus on the management of
content and resources in such systems. Our goal
is to harness all available resources in the P2P
network so that the users can access all available
content efficiently. Efficiency is taken both from
(i) the point of view of the system, in that we
strive to ensure fair load distribution among all
peer nodes, and (ii) from the point of view of the
users, in that we strive to ensure low user-request
response times.
We propose a novel architecture for this new
class of applications, which differs drastically
from what is either found currently in existing
products or proposed in academia. We contribute
and study novel solutions that achieve our goals,
while at the same time addressing the formidable
challenges due to the autonomy of peers, their
heterogeneous processing and storage capacities,
their different content contributions, the huge
system scale, and the highly dynamic system
environment.

Abstract: The peer-to-peer computing paradigm is an intriguing alternative to Google-style search
engines for querying and ranking Web content. In a network with many thousands or
millions of peers the storage and access load requirements per peer are much lighter
than for a centralized Google-like server farm; thus more powerful techniques from information
retrieval, statistical learning, computational linguistics, and ontological reasoning
can be employed on each peer's local search engine for boosting the quality
of search results. In addition, peers can dynamically collaborate on advanced and particularly
difficult queries. Moreover, a peer-to-peer setting is ideally suited to capture
local user behavior, like query logs and click streams, and disseminate and aggregate
this information in the network, at the discretion of the corresponding user, in order to
incorporate richer cognitive models.
This paper gives an overview of ongoing work in the EU Integrated Project DELIS
that aims to develop foundations for a peer-to-peer search engine with Google-or-better
scale, functionality, and quality, which will operate in a completely decentralized and
self-organizing manner. The paper presents the architecture of such a system and the
Minerva prototype testbed, and it discusses various core pieces of the approach: efficient
execution of top-k ranking queries, strategies for query routing when a search request
needs to be forwarded to other peers, maintaining a self-organizing semantic overlay
network, and exploiting and coping with user and community behavior.
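The distributed top-k ranking step can be contrasted with its naive baseline, sketched below. This is a hedged illustration only: a real system in the spirit of Minerva uses threshold-style algorithms that prune peers and avoid shipping full score lists, and all names here are assumptions.

```python
from collections import defaultdict

# Naive coordinator-side merge for distributed top-k ranking: each peer
# returns (doc, score) pairs; the coordinator sums scores per document
# and keeps the k best. Efficient algorithms avoid fetching everything.

def merge_top_k(peer_lists, k):
    total = defaultdict(float)
    for lst in peer_lists:
        for doc, score in lst:
            total[doc] += score
    return sorted(total.items(), key=lambda kv: (-kv[1], kv[0]))[:k]

peers = [
    [('d1', 0.9), ('d2', 0.5)],
    [('d2', 0.8), ('d3', 0.4)],
]
top = merge_top_k(peers, 2)
```

The cost of this baseline grows with the total posting volume, which is precisely what efficient top-k query execution in a peer-to-peer engine must avoid.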

Abstract: In this work we present a new simulation toolkit that we call TRAILS (Toolkit for Realism and Adaptivity In Large-scale Simulations), which extends the \NS simulator by adding several important functionalities and optimizing certain
critical simulator operations. The added features focus on providing the user with the necessary tools to better study wireless networks of high dynamics; in particular, to implement advanced mobility patterns, obstacle presence, disaster scenarios, and failure injection. These scenarios and patterns can change dynamically throughout the execution of the simulation based on network-related parameters. Moreover, we define a set of utilities that facilitate the use of \NS by providing advanced statistics and easy-to-use logging mechanisms. This functionality is implemented in a simple and flexible architecture that follows design patterns and object-oriented and generic programming principles, maintaining a proper balance between reusability, extensibility and ease of use. We evaluate the performance of TRAILS and show that it offers significant speed-ups (at least 4 times faster) in the execution time of \NS in certain important, common wireless settings. Our results also show that this is achieved with minimal overhead in terms of memory usage.

Abstract: We examine the problem of transmitting in minimum time a given amount of data between a
source and a destination in a network with finite channel capacities and nonzero propagation delays. In
the absence of delays, the problem has been shown to be solvable in polynomial time. In this paper, we
show that the general problem is NP-complete. In addition, we examine transmissions along a single path,
called the quickest path, and present algorithms for general and special classes of networks that improve
upon previous approaches. The first dynamic algorithm for the quickest path problem is also given.
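Under the standard quickest-path formulation (assumed here, since the abstract does not spell it out), sending $\sigma$ units of data over a path costs its total propagation delay plus $\sigma$ divided by the path's minimum capacity. A simple exact method tries each distinct capacity value as the bottleneck; the graph and values below are illustrative.

```python
import heapq

def quickest_time(n, edges, src, dst, sigma):
    """Min over bottleneck capacities c of (shortest delay using
    edges with capacity >= c) + sigma / c."""
    best = float('inf')
    for c in {cap for _, _, _, cap in edges}:
        adj = [[] for _ in range(n)]
        for u, v, d, cap in edges:
            if cap >= c:
                adj[u].append((v, d))
                adj[v].append((u, d))  # undirected network assumed
        dist = [float('inf')] * n
        dist[src] = 0
        pq = [(0, src)]
        while pq:                       # Dijkstra on restricted graph
            du, u = heapq.heappop(pq)
            if du > dist[u]:
                continue
            for v, d in adj[u]:
                if du + d < dist[v]:
                    dist[v] = du + d
                    heapq.heappush(pq, (dist[v], v))
        if dist[dst] < float('inf'):
            best = min(best, dist[dst] + sigma / c)
    return best

# edges as (u, v, delay, capacity)
edges = [(0, 1, 1, 10), (1, 2, 1, 10), (0, 2, 5, 100)]
t = quickest_time(3, edges, 0, 2, 100)
```

Note the trade-off the example exercises: the low-delay path loses to the high-capacity one once the data volume is large enough.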

Abstract: Today we are experiencing a major reconsideration of the computing
paradigm, as witnessed by the abundance and increasing frequency
of use of terms such as {\em ambient intelligence}, {\em ubiquitous computing}, {\em disappearing computer}, {\em grid
computer}, {\em global computing} and {\em mobile ad-hoc
networks}. Systems that can be described with such terms are of a
dynamic nature, with no clear physical boundary, and it seems
impossible (or, at least, difficult) to sharply define a
number of important properties holding with certainty as well as
holding throughout the whole lifetime of the system.
%
One such system property, which is important for the viability of
a system, is {\em trust}. Our departure point is the assumption
that it seems very difficult to define static system properties
related to trust and expect that they hold eternally in the
rapidly changing systems falling under the new computing paradigm.
One should, rather, attempt to define trust in terms of properties
that hold with some limiting probability as the system grows
and try to establish conditions that ensure that ``good''
properties hold {\em almost certainly}. Based on this viewpoint,
in this paper we provide a new framework for defining trust
through formally definable properties that hold, almost certainly,
in the limit in randomly growing combinatorial structures that
model ``shapeless'' computing systems (e.g. ad-hoc networks),
drawing on results that establish the threshold behavior of
predicates written in the first and second order logic.
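The threshold behavior underlying this framework can be demonstrated on the classic example of connectivity in the random graph $G(n,p)$, which jumps from almost-surely false to almost-surely true around $p \approx \ln n / n$. The parameters and Monte Carlo setup below are illustrative choices, not the paper's construction.

```python
import random

def is_connected(n, p, rng):
    """Sample G(n, p) and test connectivity by depth-first search."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

def connectivity_rate(n, p, trials=40, seed=1):
    rng = random.Random(seed)
    return sum(is_connected(n, p, rng) for _ in range(trials)) / trials

# For n = 200 the threshold sits near ln(200)/200 ~ 0.026: well below it
# the graph is almost never connected, well above it almost always.
low, high = connectivity_rate(200, 0.01), connectivity_rate(200, 0.06)
```

A trust property defined over such a structure would, in the same way, hold almost certainly once the system grows past the threshold, which is the viewpoint this framework formalizes via first and second order logic.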