Abstract: We demonstrate a highly stable, all-polarization-maintaining
fiber semiconductor ring laser source. It uses a semiconductor optical
amplifier (SOA) to provide both gain and gain modulation from an external
5-GHz optical pulse train. The laser source generates 3.5-ps pulses
up to a 50-GHz repetition rate with negligible amplitude pattern.

Abstract: The efficient use of resources and the lossless transfer of data bursts in future optical
networks requires the accurate knowledge of the available bandwidth for each network
link. Such information is important in monitoring congestions and can be used by
appropriate load balancing and congestion avoidance mechanisms. In this paper we
propose a mechanism for monitoring and subsequently managing bandwidth resources,
using the Simple Network Management Protocol (SNMP). In the proposed mechanism,
link bandwidth availability is not a scalar parameter, but a function of time that records
the future utilization of the link. For every output port, each agent-node maintains a
simple data structure in the form of a table that records the utilization profile of that
outgoing link. With the addition of new objects in the Management Information Base
(MIB) of each agent-node and proper synchronization, SNMP can be used to update
and retrieve the reservations made on the links in order to obtain an instant picture of
the network traffic situation.
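The per-port utilization table described above can be sketched in a few lines; the following is a minimal, illustrative model (class and method names, fixed-width time slots, and the atomic-reservation behavior are our assumptions, not the paper's specification):

```python
class LinkUtilizationTable:
    """Per-output-port table recording the future utilization of a link.

    Bandwidth availability is a function of time, kept here as
    fixed-width future time slots (an illustrative sketch).
    """

    def __init__(self, capacity_mbps, num_slots, slot_ms):
        self.capacity = capacity_mbps
        self.slot_ms = slot_ms
        self.reserved = [0.0] * num_slots  # Mbps reserved per future slot

    def available(self, slot):
        """Free capacity in a given future time slot."""
        return self.capacity - self.reserved[slot]

    def reserve(self, start_slot, end_slot, rate_mbps):
        """Reserve rate_mbps over [start_slot, end_slot); all-or-nothing."""
        slots = range(start_slot, end_slot)
        if any(self.available(s) < rate_mbps for s in slots):
            return False
        for s in slots:
            self.reserved[s] += rate_mbps
        return True
```

An SNMP agent would expose such a table as new MIB objects, with SET operations recording reservations and GET operations retrieving the profile.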

Abstract: Deploying a commercial application on a
grid infrastructure introduces new challenges in managing the
quality-of-service (QoS) requirements, most of which stem from the fact
that the QoS negotiated between the user and the service provider
must be strictly satisfied. An interesting commercial application
with a wide impact on a variety of fields, which can benefit from
computational grid technologies, is three-dimensional (3-D)
rendering. To implement 3-D rendering on a grid infrastructure,
however, appropriate scheduling and resource allocation mechanisms
must be developed so that the negotiated QoS requirements are met.
Efficient scheduling schemes require
modeling and prediction of rendering workload. In this paper
workload prediction is addressed based on a combined fuzzy
classification and neural network model. Initially, appropriate
descriptors are extracted to represent the synthetic world. The
descriptors are obtained by parsing RIB-formatted files; the RIB
format provides a general structure for describing computer-generated
images. Fuzzy classification is used to organize the rendering
descriptors so that a reliable representation is obtained, which
increases the prediction accuracy. A neural network performs
workload prediction by modeling the nonlinear input-output
relationship between rendering descriptors and the respective
computational complexity. To increase prediction accuracy, a
constructive algorithm is adopted in this paper to train the neural
network so that network weights and size are simultaneously
estimated. Then, a grid scheduling scheme is proposed to estimate
the order in which the tasks should be queued for execution and the
most appropriate processor assignment, so that the demanded
QoS requirements are satisfied as far as possible. A fair scheduling
policy is found to be the most appropriate. Experimental results on a real
grid infrastructure are presented to illustrate the efficiency of the
proposed workload prediction and scheduling algorithm compared
to other approaches presented in the literature.
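The prediction pipeline above can be sketched compactly: fuzzy memberships organize the descriptors, and a small feedforward network maps them to a workload estimate. This is an illustrative sketch under stated assumptions (Gaussian membership functions, a fixed one-hidden-layer network); the paper's constructive training algorithm, which also grows the network size, is not reproduced here.

```python
import math

def fuzzy_memberships(x, centers, width=1.0):
    """Gaussian degree of membership of descriptor vector x to each fuzzy
    class center, normalized so the memberships sum to 1."""
    m = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / width ** 2)
         for c in centers]
    s = sum(m)
    return [v / s for v in m]

def predict_workload(x, centers, W1, b1, w2, b2):
    """One-hidden-layer network on the fuzzy-organized descriptors,
    modeling the nonlinear descriptor-to-complexity relationship.
    W1 is a list of hidden-unit weight rows; weights are assumed given."""
    m = fuzzy_memberships(x, centers)
    h = [math.tanh(sum(w * v for w, v in zip(row, m)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * v for w, v in zip(w2, h)) + b2
```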

Abstract: In this work we propose a new unified PON
RAN architecture for LTE mobile backhaul networks,
employing ring-based WDM PONs. The proposed
architecture supports dynamic setup of virtual circuits
for inter-base station communication, over a dedicated
λ-LAN channel. The reservation mechanism is arbitrated by the OLT,
which also monitors the traffic imbalances of the downstream channels.
The proposed architecture also supports load balancing, by dynamically
reallocating and sharing the capacity of the downstream wavelengths.

Abstract: Large-scale sensor networks, monitoring an environment at close range with high spatial and temporal resolutions are expected to play an important role in various applications, e.g., assessing the ``health'' of machines; environmental, medical, food-safety, and habitat monitoring; inventory control, building automation, etc. Ensuring the security of these complex and yet resource-constrained systems has emerged as one of the most pressing challenges for researchers. In this paper (i) we present the major threats and some characteristic countermeasures, (ii) we propose a way to classify existing systems for intrusion detection in wireless sensor networks and (iii) we present a new approach for decentralized energy efficient intrusion detection that can be used to improve security from both external and internal adversaries.

Abstract: Core networks of the future will have a
translucent and eventually transparent optical structure.
Ultra-high-speed end-to-end connectivity with high quality
of service and high reliability will be realized through the
exploitation of optimized protocols and lightpath routing
algorithms. These algorithms will complement a flexible
control and management plane integrated in the proposed
solution. Physical layer impairments and optical performance
are monitored and incorporated in impairment-aware lightpath
routing algorithms. These algorithms will be integrated into
a novel dynamic network planning tool that will consider
dynamic traffic characteristics, a reconfigurable optical
layer, and varying physical impairment and component
characteristics. The network planning tool along with
extended control planes will make it possible to realize the
vision of optical transparency. This article presents a novel
framework that addresses dynamic cross-layer network
planning and optimization while considering the development
of a future transport network infrastructure.

Abstract: We present the Forward Planning Situated Protocol (FPSP) for scalable, energy efficient and fault tolerant data propagation in situated wireless sensor networks. To deal with the increased complexity of such deeply networked sensor systems, instead of emphasizing a particular aspect of the services provided, i.e. either low-energy periodic, or low-latency event-driven, or high-success query-based sensing, FPSP uses two novel mechanisms that allow the network operator to adjust the performance of the protocol in terms of energy, latency and success rate on a per-task basis. We emphasize distributedness, direct or indirect interactions among relatively simple agents, flexibility and robustness.
The protocol operates by employing a series of plan & forward phases through which devices self-organize into forwarding groups that propagate data over discovered paths. FPSP performs a limited number of long range, high power data transmissions to collect information regarding the neighboring devices. The acquired information allows the protocol to plan a sequence (of parameterizable length λ) of short range, low power transmissions between nearby particles, based on certain optimization criteria. All particles that decide to respond (based on local criteria) to these long range transmissions enter the forwarding phase, during which information is propagated via the acquired plan. Clearly, the duration of the forwarding phases is characterized by the parameter λ, the transmission medium and the processing speed of the devices. In fact, the parameter λ provides a mechanism to adjust the protocol performance in terms of the latency--energy trade-off. By reducing λ the latency is reduced at the cost of spending extra energy, while by increasing λ the energy dissipation is reduced but the latency is increased.
To control the success rate--energy trade-off, particles react locally to environment and context changes by using a set of rules based on response thresholds that relate individual-level plasticity with network-level resiliency, motivated by the nature-inspired method for dividing labor, a metaphor of social insect behavior for solving problems [1]. Each particle has an individual response threshold θ that is related to the "local" density (as observed by the particle, [2]); particles engage in propagation of events when the level of the task-associated stimuli exceeds their thresholds. Let s be the intensity of a stimulus associated with a particular sensing task, set by the human authorities. We adopt the response function T_θ(s) = s^n / (s^n + θ^n), the probability of performing the task as a function of s, where n > 1 determines the steepness of the threshold. Thus, when θ is small (i.e. the network is sparse) the response probability increases; when s increases (i.e. for critical sensing tasks) the response probability increases as well.
This role-based approach, where a selected number of devices perform the high cost planning and the rest of the network operates in a low cost state, leads to systems with increased energy efficiency and high fault-tolerance, since the long range planning phases allow the protocol to bypass obstacles (where no sensors are available) or faulty sensors (that have been disabled due to power failure or other natural events).
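The threshold-response rule can be written down directly; a minimal sketch (the function names and the default steepness n = 2 are our choices, not the paper's):

```python
import random

def response_probability(s, theta, n=2):
    """T_theta(s) = s^n / (s^n + theta^n): probability that a particle
    with response threshold theta engages in a task whose associated
    stimulus has intensity s; n > 1 sets the steepness."""
    return s ** n / (s ** n + theta ** n)

def engages(s, theta, n=2, rng=random):
    """Bernoulli trial: does this particle join the propagation task?"""
    return rng.random() < response_probability(s, theta, n)
```

Note that response_probability(s, theta) = 0.5 exactly at s = theta, grows with the stimulus intensity s, and shrinks as the threshold theta grows, matching the trade-off described above.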

Abstract: eVoting is considered to be one of the most challenging domains of modern eGovernment and one of the main vehicles for increasing eParticipation among citizens. One of the main obstacles to its wide adoption is the reluctance of citizens to participate in electronic voting procedures. This reluctance can be partially attributed to the low penetration of technology among citizens. However, the main reason behind it is the lack of trust, which stems from the belief of citizens that systems implementing an eVoting process will violate their privacy. The departure point of this approach is that the emergence of trust can be considerably facilitated by designing and building systems in a way that evidence about the system’s properties is produced during the design process. In this way, the designers can demonstrate respect for privacy using evidence that can be understood and checked by the specialist and the informed layman. These tools and models should provide sufficient evidence that the target system handles privacy concerns and requirements, so as to remove enough of the fears towards eVoting. This paper presents the efforts of the authors’ organization, the Computer Technology Institute and Press “Diophantus” (CTI), towards the design and implementation of an eVoting system, called PNYKA, with demonstrable security properties. This system was based on a trust-centered engineering approach for building general security-critical systems. The authors’ approach is pragmatic rather than theoretical, in that it sidesteps the controversy that besets the nature of trust in information systems and starts with a working definition of trust as people’s positive attitude towards a system that transparently and demonstrably performs its operations, respecting their privacy. The authors also discuss the social side of eVoting, i.e. how one can help boost its acceptance by large social groups, targeting the whole population of the country.
The authors view eVoting as an innovation that must be diffused to a population, and then employ a theoretical model that studies the diffusion of innovation in social networks, delineating structural properties of the network that help diffuse the innovation fast. Furthermore, the authors explain how CTI’s current situation empowers it to realize its vision to implement a privacy-preserving discussion and public consultation forum in Greece. This forum will link together all Greek educational institutes in order to provide a privacy-preserving discussion and opinion gathering tool useful in decision making within the Greek educational system.

Abstract: We give an efficient local search
algorithm that computes a good vertex coloring of a graph $G$. In
order to better illustrate this local search method, we view local
moves as selfish moves in a suitably defined game. In particular,
given a graph $G=(V,E)$ of $n$ vertices and $m$ edges, we define
the \emph{graph coloring game} $\Gamma(G)$ as a strategic game
where the set of players is the set of vertices and the players
share the same action set, which is a set of $n$ colors. The
payoff that a vertex $v$ receives, given the actions chosen by all
vertices, equals the total number of vertices that have chosen the
same color as $v$, unless a neighbor of $v$ has also chosen the
same color, in which case the payoff of $v$ is 0. We show:
\begin{itemize}
\item The game $\Gamma(G)$ always has pure Nash equilibria. Each
pure equilibrium is a proper coloring of $G$. Furthermore, there
exists a pure equilibrium that corresponds to an optimum coloring.
\item We give a polynomial time algorithm $\mathcal{A}$ which
computes a pure Nash equilibrium of $\Gamma(G)$. \item The total
number, $k$, of colors used in \emph{any} pure Nash equilibrium
(and thus achieved by $\mathcal{A}$) is $k\leq\min\{\Delta_2+1,
\frac{n+\omega}{2}, \frac{1+\sqrt{1+8m}}{2}, n-\alpha+1\}$, where
$\omega, \alpha$ are the clique number and the independence number
of $G$ and $\Delta_2$ is the maximum degree that a vertex can have
subject to the condition that it is adjacent to at least one
vertex of equal or greater degree. ($\Delta_2$ is no more than the
maximum degree $\Delta$ of $G$.) \item Thus, in fact, we propose
here a \emph{new}, \emph{efficient} coloring method that achieves
a number of colors \emph{satisfying (together) the known general
upper bounds on the chromatic number $\chi$}. Our method is also
an alternative general way of \emph{proving},
\emph{constructively}, all these bounds. \item Finally, we show
how to strengthen our method (staying in polynomial time) so that
it avoids ``bad'' pure Nash equilibria (i.e. those admitting a
number of colors $k$ far away from $\chi$). In particular, we show
that our enhanced method colors \emph{optimally} dense random
$q$-partite graphs (of fixed $q$) with high probability.
\end{itemize}
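The selfish dynamics behind this game can be sketched as iterated best responses: each vertex repeatedly switches to the color maximizing its payoff until no vertex can improve, at which point the coloring is a pure Nash equilibrium and hence proper. This is an illustrative implementation of the game's dynamics, not the paper's algorithm $\mathcal{A}$ (which additionally controls move order and equilibrium quality):

```python
def payoff(v, coloring, adj):
    """Payoff of vertex v: size of its color class, or 0 on a conflict
    with a neighbor (adj maps each vertex to its neighbor list)."""
    c = coloring[v]
    if any(coloring[u] == c for u in adj[v]):
        return 0
    return sum(1 for u in coloring if coloring[u] == c)

def best_response_coloring(adj):
    """Iterate selfish color changes until a pure Nash equilibrium,
    which is then a proper coloring of the graph."""
    coloring = {v: v for v in adj}  # start: a distinct color per vertex
    changed = True
    while changed:
        changed = False
        for v in adj:
            forbidden = {coloring[u] for u in adj[v]}
            counts = {}
            for u in adj:
                if u != v and coloring[u] not in forbidden:
                    counts[coloring[u]] = counts.get(coloring[u], 0) + 1
            best_c, best_p = coloring[v], payoff(v, coloring, adj)
            for c, k in counts.items():
                if k + 1 > best_p:  # payoff after switching to color c
                    best_c, best_p = c, k + 1
            if best_c != coloring[v]:
                coloring[v], changed = best_c, True
    return coloring
```

Because every strict improvement increases a potential function of the game, the loop terminates; at termination no vertex has a conflicting neighbor, so the equilibrium is a proper coloring.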

Abstract: Consider a network vulnerable to viral infection. The system security software can guarantee
safety only to a limited part of the network. Such limitations result from economic or processing
costs. The problem raised is on which part of the network the security software should
be installed, so as to secure the network as much as possible. We model this practical network
scenario as a non-cooperative multi-player game on a graph, with two kinds of players, a set
of attackers and a protector player, representing the viruses and the system security software,
respectively. Each attacker player chooses a node of the graph (or a set of them, via a probability
distribution) to infect. The protector player chooses independently, in a basic case of the
problem, a simple path or an edge of the graph (or a set of them, via a probability distribution)
and cleans this part of the network from attackers. Each attacker wishes to maximize the probability
of escaping its cleaning by the protector. In contrast, the protector aims at maximizing
the expected number of cleaned attackers. We call the two games obtained from the two basic
cases considered, as the Path and the Edge model, respectively. For these two games, we are
interested in the associated Nash equilibria, where no network entity can unilaterally improve
its local objective. We obtain the following results:
• The problem of the existence of a pure Nash equilibrium is NP-complete for the Path model.
This is in contrast to the Edge model, where no instance possesses a pure Nash equilibrium,
as proved in [7].
• In [7] a characterization of mixed Nash equilibria for the Edge model is provided. However,
that characterization only implies an exponential time algorithm for the general case.
Here, combining it with clever exploration of properties of various practical families of
graphs, we compute, in polynomial time, mixed Nash equilibria on corresponding graph
instances. These graph families include regular graphs, graphs that can be decomposed, in
polynomial time, into vertex-disjoint r-regular subgraphs, graphs with perfect matchings,
and trees.
• We utilize the notion of social cost [6] for measuring system performance in such a scenario;
here it is defined to be the utility of the protector. We prove that the corresponding Price of
Anarchy in any mixed Nash equilibrium of the game is upper- and lower-bounded by a linear
function of the number of vertices of the graph.

Abstract: In this paper we present a CPU scavenging
architecture suitable for desktop resources,
and we study its appropriateness in exploiting the
PC Laboratory resources of the Greek School
Network and their integration into the existing
HellasGrid national infrastructure. School laboratories
form an extensive network equipped with
computational systems and fast Internet connections.
As this infrastructure is utilized at most 8 h per day and 5 days per week, it could be
made available during its remaining idle time for
computational purposes through the use of Grid
technology. The structure and organization of the
school laboratories and backbone network enables
the CPU scavenging service, as an independent
and additional service, which will not violate
the operational rules and policies of the school
network, while it will add additional resources
to the current HellasGrid infrastructure with low
adaptation cost.

Abstract: Load balancing/sharing is a policy that exploits the communication facility between the servers of a distributed system, by exchanging status information and jobs between any two servers of the system, in order to improve the performance of the whole system. In this work, we propose a new adaptive distributed hierarchical scheme, the Virtual Tree Algorithm (VTA), which creates a virtual binary tree structure over the actual network topology. It uses the Difference-Initiated (DI) technique ([11, 1]) for load balancing/sharing, which needs remote information for the transfer policy, and no additional information for the location policy. We demonstrate here that the introduced virtual construction can keep the number of exchanged messages comparable to that of the previously known efficient algorithms. To support this claim and evaluate the performance of our policy, we make use of both analytical and simulation results. Using the simulation model that we developed, we compared our results with one of the most representative new adaptive, symmetrical, distributed, and efficient algorithms, the Variable Threshold (V_THR) algorithm.

Abstract: With the proliferation of wireless sensor networks and mobile technologies in general, it is possible to provide improved medical services and also to reduce costs as well as to manage the shortage of specialized personnel. Monitoring a person’s health condition using sensors provides a lot of benefits, but also exposes personal sensitive information to a number of privacy threats. By recording user-related data, it is often feasible for a malicious or negligent data provider to expose these data to an unauthorized user. One solution is to protect the patient’s privacy by making it difficult to link specific measurements with a patient’s identity. In this paper we present a privacy-preserving architecture which builds upon the concept of k-anonymity; we present a clustering-based anonymity scheme for effective network management and data aggregation, which also protects a user’s privacy by making an entity indistinguishable from k other similar entities. The presented algorithm is resource-aware, as it minimizes energy consumption with respect to other, more costly, cryptography-based approaches. The system is evaluated from an energy-consumption and network-performance perspective, under different simulation scenarios.
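The clustering idea behind such a scheme can be sketched simply: group nodes into clusters of at least k members and report only per-cluster aggregates, so that any reported measurement maps to at least k candidate sources. This is an illustrative sketch only (the grouping key, the greedy slicing, and the averaging aggregate are our assumptions, not the paper's algorithm):

```python
def k_anonymous_clusters(node_positions, k):
    """Greedily group sensor nodes into clusters of at least k members:
    sort node ids by a 1-D position key and slice into groups of k,
    merging an undersized tail into the previous cluster."""
    ids = sorted(node_positions, key=lambda i: node_positions[i])
    clusters = [ids[i:i + k] for i in range(0, len(ids), k)]
    if len(clusters) > 1 and len(clusters[-1]) < k:
        clusters[-2].extend(clusters.pop())
    return clusters

def cluster_report(cluster, readings):
    """Aggregate (average) the cluster's readings under the cluster id,
    hiding which member produced which measurement."""
    vals = [readings[i] for i in cluster]
    return sum(vals) / len(vals)
```

Aggregation at the cluster head also reduces the number of radio transmissions, which is where the energy savings relative to per-node encrypted reporting would come from.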


Abstract: In mobile ad-hoc networks (MANETs), the mobility of the nodes is a complicating factor that significantly affects the effectiveness and performance of the routing protocols. Our work builds upon recent results on the effect of node mobility on the performance of available routing strategies (i.e. path-based, using support) and proposes a protocol framework that exploits the usually different mobility rates of the nodes by adapting the routing strategy during execution. We introduce a metric for the relative mobility of the nodes, according to which the nodes are classified into mobility classes. These mobility classes determine, for any pair of an origin and destination, the routing technique that best corresponds to their mobility properties. Moreover, special care is taken for nodes remaining almost stationary or moving with high (relative) speeds. Our key design goal is to limit the necessary implementation changes required to incorporate existing routing protocols into our framework. We provide an extensive evaluation of the proposed framework, using a well-known simulator (NS2). Our first findings demonstrate that the proposed framework improves, in certain cases, the performance of the existing routing protocols.

Abstract: In ad-hoc mobile networks (MANET), the mobility of the nodes is a complicating factor that significantly affects the effectiveness and performance of the routing protocols. Our work builds upon the recent results on the effect of node mobility on the performance of available routing strategies (i.e. path-based, using support) and proposes a protocol framework that exploits the usually different mobility rates of the nodes by adapting the routing strategy during execution. We introduce a metric for the relative mobility of the nodes, according to which the nodes are classified into mobility classes. These mobility classes determine, for any pair of origin and destination, the routing technique that best corresponds to their mobility properties. Moreover, special care is taken for nodes remaining almost stationary or moving with high (relative) speeds. Our key design goal is to limit the necessary implementation changes required to incorporate existing routing protocols in our framework. We provide an extensive evaluation of the proposed framework, using a well-known simulator (NS2). Our first findings demonstrate that the proposed framework improves, in certain cases, the performance of the existing routing protocols.

Abstract: We propose and evaluate a new burst assembly algorithm based on the average delay of the packets comprising a burst. This
method fixes the average delay of the packets belonging to an assembled burst to a desired value T_AVE that may be different for
each forwarding equivalence class (FEC). We show that the proposed method significantly improves the delay jitter experienced
by the packets during the burst assembly process, when compared to that of timer-based and burst length-based assembly policies.
Minimizing packet delay jitter is important in a number of applications, such as real-audio and streaming-video applications. We
also find that the improvement in the packet delay jitter yields a corresponding significant improvement in the performance of TCP,
whose operation depends critically on the ability to obtain accurate estimates of the round-trip times (RTT).
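The average-delay rule can be sketched as follows: a burst departs at the moment the average delay of its queued packets reaches T_AVE, which is always mean(arrival times) + T_AVE, since each delay is the departure time minus an arrival time. The sketch below is a simplified single-FEC model with illustrative names, not the paper's full algorithm:

```python
def assemble_bursts(arrivals, t_ave):
    """Group a time-sorted packet arrival stream into bursts whose average
    packet delay at departure equals t_ave. Each newly queued packet shifts
    the scheduled departure, which stays at mean(arrivals) + t_ave."""
    bursts, current, depart = [], [], None
    for t in arrivals:
        if depart is not None and t >= depart:
            bursts.append((current, depart))  # burst left before this packet
            current, depart = [], None
        current.append(t)
        depart = sum(current) / len(current) + t_ave
    if current:
        bursts.append((current, depart))
    return bursts
```

Every burst leaves with exactly the target average delay, which is why the per-packet delay jitter around T_AVE is smaller than under timer-based or length-based assembly.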

Abstract: We present a new dynamic graph structure specifically suited
for large-scale transportation networks that provides simultaneously three
unique features: compactness, agility and dynamicity. We demonstrate
its practicality and superiority by conducting an experimental study for
shortest route planning in large-scale European and US road networks
with a few dozen millions of nodes and edges. Our approach is the first
one that concerns the dynamic maintenance of a large-scale graph with
ordered elements using a contiguous memory part, and which allows an
arbitrary online reordering of its elements.

Abstract: In this paper, we present a new hybrid optical burst switch architecture (HOBS) that takes advantage of the pre-transmission idle
time during lightpath establishment. In dynamic circuit switching (wavelength routing) networks, capacity is immediately hard-reserved
upon the arrival of a setup message at a node, but it is used at least a round-trip time delay later. This waste of resources
is significant in optical multi-gigabit networks and can be used to transmit traffic of a lower class of service in a non-competing
way. The proposed hybrid OBS architecture, takes advantage of this idle time to transmit one-way optical bursts of a lower class of
service, while high priority data explicitly requests and establishes end-to-end lightpaths. In the proposed scheme, the two control
planes (two-way and one-way OBS reservation) are merged, in the sense that each SETUP message, used for the two-way lightpath
establishment, is associated with one-way burst transmission and therefore it is modified to carry routing and overhead information
for the one-way traffic as well. In this paper, we present the main architectural features of the proposed hybrid scheme and further
we assess its performance by conducting simulation experiments on the NSFNET backbone topology. The extensive network study
revealed that the proposed hybrid architecture can achieve and sustain an adequate burst transmission rate with a finite worst case
delay.

Abstract: In this work we present the basic concepts in the architecture of a peer-to-peer environment for monitoring multiple wireless sensor networks, called ShareSense. ShareSense, which is currently under development, uses JXTA as a peer-to-peer substrate. We demonstrate its basic functionalities using a simple application scenario, which utilizes multiple disparate wireless sensor networks. This application scenario involves monitoring of such networks using a separate management environment and a custom application GUI, as well as using Google Earth as an additional user interface.

Abstract: In this work we present the architecture and implementation of WebDust, a software platform for managing multiple, heterogeneous (both in terms of software and hardware), geographically disparate sensor networks. We describe in detail the main concepts behind its design, and basic aspects of its implementation, including the services provided to end-users and developers. WebDust uses a peer-to-peer substrate, based on JXTA, in order to unify multiple sensor networks installed in various geographic areas. We aim at providing a software framework that will permit developers to deal with the new and critical aspects that networks of sensors and tiny devices bring into global computing, and to provide a coherent set of high-level services, design rules and technical recommendations, in order to be able to develop the envisioned applications of global sensor networks. Furthermore, we give an overview of a deployed distributed testbed, consisting of a total of 56 nodes, and describe in more detail two specific testbed sites and the integration of the related software and hardware technologies used for its operation with our platform. Finally, we describe the design and implementation of an interface option provided to end-users, based on the popular Google Earth application.

Abstract: We study the problem of data propagation in sensor networks,
comprised of a large number of very small and low-cost nodes,
capable of sensing, communicating and computing. The distributed
co-operation of such nodes may lead to the accomplishment of large
sensing tasks, having useful applications in practice. We present
a new protocol for data propagation towards a control center
(``sink") that avoids flooding by probabilistically favoring
certain (``close to optimal") data transmissions.
This protocol is very simple to implement in sensor devices
and operates under total absence
of co-ordination between sensors. We consider a network model of randomly deployed sensors of sufficient density.
As shown by a geometric analysis,
the protocol is correct, since it always propagates data
to the sink, under ideal network conditions (no failures). Using
stochastic processes, we show that the protocol is very energy efficient. Also, when part of the network is inoperative, the
protocol manages to propagate data very close to the sink, thus in
this sense it is robust. We finally present and discuss
large-scale experimental findings validating the analytical
results.
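A probabilistic-forwarding rule of this flavor can be sketched concretely: a node retransmits with a probability that favors nodes lying close to the source-sink line, here p = φ/π with φ the angle at the node in the source-node-sink triangle. This is a sketch in the spirit of the rule described above; the paper's exact forwarding criterion may differ.

```python
import math

def forward_probability(node, source, sink):
    """Retransmission probability p = phi / pi, where phi is the angle
    at the node between the source and the sink. A node on the direct
    source-sink line sees phi = pi and forwards with probability 1;
    nodes far off the line get a small probability."""
    ab = math.dist(source, node)
    cb = math.dist(sink, node)
    ac = math.dist(source, sink)
    # law of cosines for the angle at the node, clamped for safety
    cos_phi = (ab * ab + cb * cb - ac * ac) / (2 * ab * cb)
    phi = math.acos(max(-1.0, min(1.0, cos_phi)))
    return phi / math.pi
```

Since each node needs only its own position and those of the source and sink, the rule requires no coordination between sensors, matching the protocol's design goal.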

Abstract: We study the problem of data propagation in sensor networks,
comprised of a large number of very small and low-cost nodes,
capable of sensing, communicating and computing. The distributed
co-operation of such nodes may lead to the accomplishment of large
sensing tasks, having useful applications in practice. We present a new protocol for data propagation towards a control center ("sink") that avoids flooding by probabilistically favoring certain ("close to optimal") data transmissions. We study here lattice-shaped sensor networks, motivated by certain applications and also as a starting point for a rigorous analysis. However, we show that this lattice shape emerges even in randomly deployed sensor networks of sufficient sensor density. Our work is inspired by and builds upon the directed diffusion paradigm.
This protocol is very simple to implement in sensor devices, uses only local information and operates under total absence of co-ordination between sensors. We consider a network model of randomly deployed sensors of sufficient density. As shown by a geometric analysis, the protocol is correct, since it always propagates data to the sink, under ideal network conditions (no failures). Using stochastic processes, we show that the protocol is very energy efficient. Also, when part of the network is inoperative, the protocol manages to propagate data very close to the sink, thus in this sense it is robust. We finally present and discuss large-scale experimental findings validating the analytical results.

Abstract: We propose a MAC protocol for mobile ad hoc networks that
uses power control for the RTS/CTS and DATA frame
transmissions in order to improve energy and capacity
utilization efficiency. Unlike IEEE 802.11, in our scheme the
RTS frames are not sent using the maximum transmission
power to silence neighbouring nodes, and the CTS frames do
not silence all receiving nodes to the same degree. In contrast,
the transmission power of the RTS frames follows a slow
start principle, while the CTS frames, which are sent at
maximum transmission power, prevent the neighbouring
nodes from transmitting their DATA frames with power more
than a computed threshold, while allowing them to transmit at
power levels less than that threshold. This is done by
including in the RTS and the CTS frames additional
information, such as the power of the transmissions, and the
interference tolerance of the nodes. Moreover, the DATA
frames are sent at the minimum required transmission power
increased by a small margin to ensure connectivity with the
intended receiver, so as to cause minimal interference to
neighbouring nodes and allow for future interference to be
added to the receiver of the DATA frames. The power to be
used by the transmitter is computed by the recipient of the
RTS frame and is included in the CTS frame. It is expected
that a network with such a power management scheme would
achieve better throughput performance and greater power
savings than a network without such a scheme.
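The power computation described above (receiver derives the DATA transmit power from the received RTS and returns it in the CTS) can be sketched under a simple reciprocal-channel assumption. Function names, the linear path-loss model and the margin value are our illustrative assumptions, not the paper's specification.

```python
def min_tx_power(rts_tx_power_mw, rts_rx_power_mw,
                 rx_sensitivity_mw, margin=1.2):
    """Power the DATA sender should use, computed by the RTS recipient
    and carried back in the CTS frame.

    Assumes a reciprocal channel: the path gain measured on the RTS
    equals the gain the DATA frame will see.  All values are linear
    (mW); the small margin keeps the link connected under fading.
    Illustrative sketch only."""
    path_gain = rts_rx_power_mw / rts_tx_power_mw   # < 1 for any real link
    return rx_sensitivity_mw * margin / path_gain

# Example: an RTS sent at 100 mW arrives at 0.5 mW; the receiver needs
# at least 0.2 mW to decode the DATA frame.
print(min_tx_power(100.0, 0.5, 0.2))   # 48.0 mW -- well below the maximum
```

The interference-tolerance threshold advertised in the CTS would be derived analogously, from how much extra noise the receiver can absorb while still decoding at this power level.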

Abstract: In this paper we present a platform for developing mobile, locative and collaborative distributed games comprised of small programmable object technologies (e.g., wireless sensor networks) and traditional networked processors.
The platform is implemented using a combination of JAVA
Standard and Mobile editions, targeting also mobile phones
that have some kind of sensors installed. We briefly present
the architecture of our platform and demonstrate its capabilities by reporting two pervasive multiplayer games. The key
characteristic of these games is that players interact with each
other and their surrounding environment by moving, running
and gesturing as a means to perform game related actions, using small programmable object technologies.

Abstract: We present an improved upper bound on the competitiveness of the online colouring algorithm First-Fit in disk graphs, which
are graphs representing overlaps of disks on the plane. We also show that this bound is best possible for deterministic online
colouring algorithms that do not use the disk representation of the input graph. Finally, we present a related new lower bound for unit
disk graphs.
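First-Fit, the online algorithm analyzed above, assigns each arriving vertex the smallest colour not used by its already-coloured neighbours. A minimal sketch (the graph and presentation order below are our own example, not from the paper):

```python
def first_fit(neighbors, order):
    """First-Fit online colouring: each arriving vertex takes the
    smallest colour absent from its already-coloured neighbours."""
    colour = {}
    for v in order:
        used = {colour[u] for u in neighbors[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# A 4-cycle (2-colourable), presented in the order 0, 2, 1, 3:
g = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(first_fit(g, [0, 2, 1, 3]))   # {0: 0, 2: 0, 1: 1, 3: 1}
```

The competitiveness question is how many colours First-Fit can be forced to use, over all adversarial presentation orders, relative to the chromatic number of the disk graph.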

Abstract: eVoting is a challenging approach for increasing eParticipation. However, lack of citizens' trust seems to be a main obstacle that hinders its successful realization. In this paper we propose a trust-centered engineering approach for building eVoting systems that people can trust, based on transparent design and implementation phases. The approach is based on three components: the decomposition of eVoting systems into "layers of trust" for reducing the complexity of managing trust issues in smaller manageable layers, the application of a risk analysis methodology able to identify and document security critical aspects of the eVoting system, and a cryptographically secure eVoting protocol. Our approach is pragmatic rather than theoretical in the sense that it sidesteps the controversy that besets the nature of trust in information systems and starts with a working definition of trust as people's positive attitude towards a system that performs its operations transparently.

Abstract: We introduce a new modelling assumption for wireless sensor networks, that of node redeployment (addition of sensor devices during protocol evolution) and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks and adaptation becomes a powerful technique for protocol design. Under these modelling assumptions, we design, implement and evaluate a new power conservation scheme for efficient data propagation. Our scheme is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards improved operation choices. The scheme is simple, distributed and does not require exchange of control messages between nodes.
Implementing our protocol in software we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of the network simulator ns-2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-offs between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation protocol) and good trade-offs achieved. Furthermore, the redeployment of additional sensors during network evolution and/or the heterogeneous deployment of sensors drastically improve (when compared to "equal total power" simultaneous deployment of identical sensors at the start) the protocol performance (i.e. the success rate increases up to four times while reducing energy dissipation and, interestingly, keeping latency low).

Abstract: Clustering is a crucial network design approach to enable large-scale wireless sensor networks (WSNs) deployments. A large variety of clustering approaches has been presented focusing on different performance metrics. Such protocols usually aim at minimizing communication overhead, evenly distributing roles among the participating nodes, as well as controlling the network topology. Simulations on such protocols are performed using theoretical models that are based on unrealistic assumptions like the unit disk graph communication model, ideal wireless communication channels and perfect energy consumption estimations. With these assumptions taken for granted, theoretical models claim various performance milestones that cannot be achieved in realistic conditions. In this paper, we design a new clustering protocol that adapts to the changes in the environment and the needs and goals of the user applications. We address the issues that hinder its performance due to the real environment conditions and provide a deployable protocol. The implementation, integration and experimentation of this new protocol and its optimizations were performed using the WISEBED framework. We apply our protocol in multiple indoor wireless sensor testbeds with multiple experimental scenarios to showcase scalability and trade-offs between network properties and configurable protocol parameters. By analyzing the real-world experimental output, we present results that depict a more realistic view of the clustering problem, regarding adapting to environmental conditions and the quality of topology control. Our study clearly demonstrates the applicability of our approach and the benefits it offers to both research & development communities.

Abstract: We study the problem of secure routing in wireless sensor networks where the sensors and the sink can move during the execution of remote monitoring applications and communication is not necessarily directed towards the sink. We present a new routing protocol that builds upon a collection of mechanisms so that the integrity and confidentiality of the information reported to the controlling authorities is secured. The mechanisms are simple to implement, rely only on local information and require O(1) storage per sensor. The protocol adapts to mobility and security challenges that may arise throughout the execution of the application. We take special care for wireless sensor networks that monitor dynamically changing environments and applications that require operation for extended periods of time. APSR can detect when the current network conditions are about to change and prepares to adapt to the new conditions. We demonstrate how to deal with inside and outside attacks even when the network is adapting to internal and/or external events.

Abstract: Data propagation in wireless sensor networks can be performed either by hop-by-hop single transmissions or by multi-path broadcast of data. Although several energy-aware MAC layer protocols exist that operate very well in the case of single point-to-point transmissions, none is especially designed and suitable for multiple broadcast transmissions. The key idea of our protocols is the passive monitoring of local network conditions and the adaptation of the protocol operation accordingly. The main contribution of our adaptive method is to proactively avoid collisions by implicitly and early enough sensing the need for collision avoidance. Using the above ideas, we design, implement and evaluate three different, new strategies for proactive adaptation. We show, through a detailed and extended simulation evaluation, that our parameter-based family of protocols for multi-path data propagation significantly reduce the number of collisions and thus increase the rate of successful message delivery (to above 90%) by achieving satisfactory trade-offs with the average propagation delay. At the same time, our protocols are shown to be very energy efficient, in terms of the average energy dissipation per delivered message.
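The core mechanism above, passively monitoring local channel conditions and adapting before collisions occur, can be sketched as a contention-window controller. The thresholds, bounds and doubling rule below are our illustrative assumptions, not the paper's parameters.

```python
def adapt_cw(cw, observed_busy_ratio, low=0.2, high=0.6,
             cw_min=8, cw_max=256):
    """Proactive collision-avoidance sketch: a broadcasting node
    passively tracks how often it senses the channel busy and widens
    its contention window *before* collisions happen, narrowing it
    again when the medium is mostly idle."""
    if observed_busy_ratio > high:        # congestion building up
        cw = min(cw * 2, cw_max)
    elif observed_busy_ratio < low:       # medium mostly idle
        cw = max(cw // 2, cw_min)
    return cw

cw = 32
print(adapt_cw(cw, 0.7))   # 64 -- back off early, before collisions occur
print(adapt_cw(cw, 0.1))   # 16 -- reclaim bandwidth when the channel is quiet
```

The contrast with reactive MAC schemes is that the trigger is the sensed trend in channel occupancy rather than an actual failed transmission.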

Abstract: Energy consumption accounts for a large portion of the budget of school buildings. At the same time, the students that use such facilities are the adults of the years to come and thus, should they embrace energy-aware behaviors, then sustainable results with respect to energy efficiency are anticipated. GAIA is a research project targeting this user domain, proposing a set of applications that a) aims at raising awareness, prompting action and fostering engagement in energy efficiency enhancement, and b) is adaptable to the needs of each facility/community. This application set relies on an IoT sensing infrastructure, as well as on the use of humans as sensors to create situational awareness.

Abstract: Smart cities are becoming a vibrant application domain for a number of science fields. As such, service providers and stakeholders are beginning to integrate co-creation aspects into current implementations to shape the future smart city solutions. In this context, holistic solutions are required to test such aspects in real city-scale IoT deployments, considering the complex city ecosystems. In this work, we discuss OrganiCity's implementation of an Experimentation-as-a-Service framework, presenting a toolset that allows developing, deploying and evaluating smart city solutions in a one-stop shop manner. This is the first time such an integrated toolset is offered in the context of a large-scale IoT infrastructure, which spans across multiple European cities. We discuss the design and implementation of the toolset, presenting our view on what Experimentation-as-a-Service should provide, and how it is implemented. We present initial feedback from 25 experimenter teams that have utilized this toolset in the OrganiCity project, along with a discussion on two detailed actual use cases to validate our approach. Learnings from all experiments are discussed as well as architectural considerations for platform scaling. Our feedback from experimenters indicates that Experimentation-as-a-Service is a viable and useful approach.

Abstract: Wireless sensor networks are comprised of a vast number of
ultra-small autonomous computing, communication and sensing devices,
with restricted energy and computing capabilities, that co-operate
to accomplish a large sensing task. Such networks can be very useful
in practice, e.g.~in the local monitoring of ambient conditions and
reporting them to a control center. In this paper we propose a
distributed group key establishment protocol that uses mobile agents
(software) and is particularly suitable for energy constrained,
dynamically evolving ad-hoc networks. Our approach totally avoids
the construction and the maintenance of a distributed structure that
reflects the topology of the network. Moreover, it trades off
complex message exchanges by performing some amount of additional
local computations in order to be applicable to dense and dynamic
sensor networks. The extra computations are simple for the devices
to implement and are evenly distributed across the participants of
the network leading to good energy balance. We evaluate the
performance of our protocol in a simulated environment and compare
our results with existing group key establishment protocols. The
security of the protocol is based on the Diffie-Hellman problem and
we used in our experiments its elliptic curve analog. Our findings
basically indicate the feasibility of implementing our protocol in
real sensor network devices and highlight the advantages and
disadvantages of each approach given the available technology and
the corresponding efficiency (energy, time) criteria.
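The Diffie-Hellman foundation of the group key establishment can be illustrated with a toy contributory-key chain: because exponents commute, every member can fold its secret into a running value and the resulting key is independent of the order of contributions. This is a stand-in for the elliptic-curve groups used in the paper's experiments; the prime, generator and secrets below are illustrative and far too small to be secure.

```python
# Toy contributory group key in a multiplicative group mod a prime.
P = 18446744073709551557   # 2**64 - 59, a small prime -- NOT secure
G = 5

def contribute(value, secret):
    """One member's Diffie-Hellman contribution: raise the running
    group value to its private exponent."""
    return pow(value, secret, P)

secrets = [0xA1, 0xB2, 0xC3]            # one private key per group member
key1 = G
for s in secrets:                        # members contribute in turn
    key1 = contribute(key1, s)
key2 = G
for s in reversed(secrets):              # any contribution order agrees
    key2 = contribute(key2, s)
assert key1 == key2                      # shared group key established
```

In a real deployment each exchange would happen over the radio via the mobile agents, and the modular group would be replaced by its elliptic-curve analog, which offers equivalent security at far smaller key sizes, the property that makes it attractive for constrained sensor hardware.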

Abstract: For many random Constraint Satisfaction Problems, by now, we have asymptotically tight estimates of the largest constraint density for which they have solutions. At the same time, all known polynomial-time algorithms for many of these problems already completely fail to find solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms already achieve this goal. Given the simplicity of those algorithms, one would expect there is a lot of room for improvement. Yet, to date, no algorithm is known that uses (2 - ε)χ colors, in spite of efforts by numerous researchers over the years.
In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we believe it is natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of {1, ..., k}^n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly, the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 - ε)χ. We prove that a completely analogous phase transition also occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each problem, its location corresponds precisely with the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to prove rigorously much of the celebrated 1-step Replica-Symmetry-Breaking hypothesis of statistical physics for random CSPs.

Abstract: For a number of optimization problems on random graphs
and hypergraphs, e.g., k-colorings, there is a very big gap between the
largest average degree for which known polynomial-time algorithms can
find solutions, and the largest average degree for which solutions provably
exist. We study this phenomenon by examining how sets of solutions
evolve as edges are added. We prove in a precise mathematical sense that,
for each problem studied, the barrier faced by algorithms corresponds
to a phase transition in the problem's solution-space geometry. Roughly
speaking, at some problem-specific critical density, the set of solutions
shatters and goes from being a single giant ball to exponentially many,
well-separated, tiny pieces. All known polynomial-time algorithms work
in the ball regime, but stop as soon as the shattering occurs. Besides
giving a geometric view of the solution space of random instances our
results provide novel constructions of one-way functions.

Abstract: As a result of recent significant technological advances, a new computing and communication environment, Mobile Ad Hoc Networks (MANET), is about to enter the mainstream. A multitude of critical aspects, including mobility, severe limitations and limited reliability, create a new set of crucial issues and trade-offs that must be carefully taken into account in the design of robust and efficient algorithms for these environments. The communication among mobile hosts is one among the many issues that need to be resolved efficiently before MANET becomes a commodity.
In this paper, we discuss the communication problem in MANET and present some characteristic techniques for the design, the analysis and the performance evaluation of distributed communication protocols for mobile ad hoc networks. More specifically, we review two different design techniques. While the first type of protocols tries to create and maintain routing paths among the hosts, the second set of protocols uses a randomly moving subset of the hosts that acts as an intermediate pool for receiving and delivering messages. We discuss the main design choices for each approach, along with performance analysis of selected protocols.

Abstract: About this book
This state-of-the-art survey features papers that were selected after an open call following the International Dagstuhl Seminar on Algorithmic Methods for Railway Optimization held in Dagstuhl Castle, Germany, in June 2004. The second part of the volume constitutes the refereed proceedings of the 4th International Workshop on Algorithmic Methods and Models for Optimization of Railways held in Bergen, Norway, in September 2004.
The volume covers algorithmic methods for analyzing and solving problems arising in railway optimizations, with a special focus on the interplay between railway and other public transportation systems. Beside algorithmics and mathematical optimization, the relevance of formal models and the influence of applications on problem modeling are also considered. In addition, the papers address experimental studies and useful prototype implementations.
The 17 full papers presented here were carefully reviewed and selected from numerous submissions and are organized into topical sections covering network and line planning, timetabling and timetable information, rolling stock and crew scheduling, and real-time operations.

Abstract: Ever-growing emphasis is nowadays placed on developing personalized journey planning and renewable mobility services in smart cities. These services combine means of schedule-based public transport and electric vehicles or bikes, using crowdsourcing techniques for collecting real-time traffic information and for assessing the recommended routes. The goal is to develop an information system that allows the fast, real-time computation of best routes.
The main challenges in developing such an information system are both technological and algorithmic. The technological challenge concerns the collection, storage, management, and updating of a huge volume of transport data that are usually time-dependent, and the provision (through these data) of personalized renewable mobility services in smartphones. This challenge is typically confronted by creating a cloud infrastructure that on the one hand will support the storage, management, and updating of data, while on the other hand it will handle the necessary data feed to the smartphone applications for providing the users with the requested best routes.
The algorithmic challenge concerns the development of innovative algorithms for the efficient provision of journey planning services in smartphones, based on data they will receive from the cloud infrastructure. These services guarantee the computation of realistic and useful best routes, as well as the updating of the precomputed (route and timetable) information in case of delays of scheduled public transport vehicles, so that the users can update their routes to destination online. The goal is to develop an algorithmic basis for supporting modern renewable mobility services (information systems), such as "mobility on demand" (where the next leg of a journey is decided in real-time) and "door-to-door" personalized mobility, in urban schedule-based public transport environments. Schedule-based public transport information systems should not only compute in real-time end-user queries requesting best routes, but also update the timetable information in case of delays.
The core algorithmic issues of mobility and journey planning (regarding the computation of optimal routes under certain criteria) in schedule-based public transport systems concern the efficient solution of the fundamental earliest arrival (EA) problem (compute a journey from station S to station T minimizing the overall traveling time required to complete the journey), the minimum number of transfers (MNT) problem (compute a journey from station S to station T minimizing the number of times a passenger is required to change vehicle), and the efficient updating of the timetable information system in case of vehicle delays. The EA and MNT problems have been extensively studied in the literature under two main approaches: the array-based modeling (where the timetable is represented as an array) and the graph-based modeling (where the timetable is represented as a graph). Experimental results have shown so far that the array-based approaches are faster in terms of query time than graph-based ones, as they are able to better exploit data locality and do not rely on priority queues. On the other hand, the array-based approaches have not been theoretically or experimentally studied as far as the efficient updating of timetable information, in case of delays, is concerned.
In this thesis, new graph-based models are developed that efficiently solve the aforementioned fundamental algorithmic mobility problems in urban schedule-based public transport information systems, along with a mobile application (journey planner) running on Android-based smartphones that includes a service for the evaluation of the recommended routes by the users. In particular:
(a) An extensive comparative evaluation was conducted on graph-based dynamic models that represent big data volumes regarding their suitability for representing timetable information. The study confirmed that the realistic time-expanded model is the most suitable for representing timetable information.
(b) Two new graph-based models have been developed for representing timetable information (in a timetable information system), the reduced time-expanded model and the dynamic timetable model (DTM), both of which are more space-efficient with respect to the realistic time-expanded model. For both of the new models, new efficient algorithms were developed for fast answering of EA and MNT queries, as well as for updating the timetable information representation in case of delays.
(c) An experimental evaluation was conducted with the new graph-based models and their associated query and update algorithms on a set of 14 real-world schedule-based transportation systems, including the metropolitan areas of Berlin, Athens, London, Rome, and Madrid. The experimental results showed that the query algorithms of the reduced time-expanded model are superior to those of the DTM model, while the reverse is true regarding the update algorithms. In addition, the experimental study showed that the query algorithms of the new graph-based models compete favorably with those of the best array-based models.
(d) A mobile, cloud-based, journey planner (information system) was developed whose core algorithmic engine builds upon the new graph-based models. The mobile application is accompanied by a service that allows the users to assess the recommended journeys. The journey planner demonstrates the practicality of the new graph-based models and their associated query and update algorithms.

Abstract: In this work, we study the Population Protocol model of Angluin et al. from the perspective of protocol verification. In particular, we are interested in algorithmically solving the problem of determining whether a given population protocol conforms to its specifications. Since this is the first work on verification of population protocols, we redefine most notions of population protocols in order to make them suitable for algorithmic verification. Moreover, we formally define the general verification problem and some interesting special cases. All these problems are shown to be NP-hard. We next propose some first algorithmic solutions for a natural special case. Finally, we conduct experiments and algorithmic engineering in order to improve our verifiers' running times.
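For tiny populations, conformance questions of this kind can be attacked by brute-force exploration of the reachable configuration space, which conveys why the general problem blows up combinatorially. The protocol and population sizes below are our own illustrative example, not the verifiers engineered in the paper.

```python
from itertools import combinations

def reachable(config, rules):
    """All configurations reachable from `config` (a tuple of agent
    states) under pairwise interactions given by `rules`, a dict
    mapping (initiator_state, responder_state) -> new state pair.
    Exhaustive search, feasible only for very small populations."""
    seen, stack = {config}, [config]
    while stack:
        c = stack.pop()
        for i, j in combinations(range(len(c)), 2):
            for a, b in ((i, j), (j, i)):        # both interaction roles
                key = (c[a], c[b])
                if key in rules:
                    nc = list(c)
                    nc[a], nc[b] = rules[key]
                    nc = tuple(nc)
                    if nc not in seen:
                        seen.add(nc)
                        stack.append(nc)
    return seen

# One-way epidemic (OR protocol): an infected agent (1) infects a healthy one.
rules = {(1, 0): (1, 1)}
# From any configuration with at least one 1, all-ones is reachable:
print((1, 1, 1) in reachable((1, 0, 0), rules))   # True
```

Deciding whether every fair execution stabilizes to the specified output requires reasoning over all such reachability structures, which is the source of the NP-hardness results.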

Abstract: The 3-coloring problem is well known to be NP-complete. It is also well known that it remains NP-complete when the input is restricted to graphs with diameter 4. Moreover, assuming the Exponential Time Hypothesis (ETH), 3-coloring cannot be solved in time 2^{o(n)} on graphs with n vertices and diameter at most 4. In spite of the extensive studies of the 3-coloring problem with respect to several basic parameters, the complexity status of this problem on graphs with small diameter, i.e. with diameter at most 2, or at most 3, has been a longstanding and challenging open question. In this paper we investigate graphs with small diameter. For graphs with diameter at most 2, we provide the first subexponential algorithm for 3-coloring, with complexity 2^{O(sqrt(n log n))}, which is asymptotically the same as the currently best known time complexity for the graph isomorphism problem. Furthermore we extend the notion of an articulation vertex to that of an articulation neighborhood, and we provide a polynomial algorithm for 3-coloring on graphs with diameter 2 that have at least one articulation neighborhood. For graphs with diameter at most 3, we establish the complexity of 3-coloring, even for the case of triangle-free graphs. Namely we prove that for every ε ∈ [0,1), 3-coloring is NP-complete on triangle-free graphs of diameter 3 and radius 2 with n vertices and minimum degree δ = Θ(n^ε). Moreover, assuming ETH, we use three different amplification techniques of our hardness results, in order to obtain for every ε ∈ [0,1) subexponential asymptotic lower bounds for the complexity of 3-coloring on triangle-free graphs with diameter 3 and minimum degree δ = Θ(n^ε). Finally, we provide a 3-coloring algorithm with running time 2^{O(min{δΔ, (n/δ) log δ})} for arbitrary graphs with diameter 3, where n is the number of vertices and δ (resp. Δ) is the minimum (resp. maximum) degree of the input graph. To the best of our knowledge, this algorithm is the first subexponential algorithm for graphs with δ = ω(1) and for graphs with δ = O(1) and Δ = o(n). Due to the above lower bounds of the complexity of 3-coloring, the running time of this algorithm is asymptotically almost tight when the minimum degree of the input graph is δ = Θ(n^ε), where ε ∈ [1/2, 1).

Abstract: In this work we study the important problem of colouring squares of planar graphs (SQPG). We design and implement two new algorithms that colour SQPG in different ways. We call these algorithms MDsatur and RC. We have also implemented and experimentally evaluated the performance of most of the known approximation colouring algorithms for SQPG [14, 6, 4, 10]. We compare the quality of the colourings achieved by these algorithms with the colourings obtained by our algorithms and with the results obtained from two well-known greedy colouring heuristics. The heuristics are mainly used for comparison reasons and unexpectedly give very good results. Our algorithm MDsatur outperforms the known algorithms, as shown by the extensive experiments we have carried out.
The planar graph instances whose squares are used in our experiments are “non-extremal” graphs obtained by LEDA and hard colourable graph instances that we construct.
The most interesting conclusions of our experimental study are:
1) All colouring algorithms considered here have almost optimal performance on the squares of "non-extremal" planar graphs.
2) All known colouring algorithms especially designed for colouring SQPG give significantly better results, even on hard-to-colour graphs, when the vertices of the input graph are randomly named. On the other hand, the performance of our algorithm, MDsatur, becomes worse in this case; however, it still has the best performance compared to the others. MDsatur colours the tested graphs with 1.1 OPT colours in most of the cases, even on hard instances, where OPT denotes the number of colours in an optimal colouring.
3) We construct worst-case instances for the algorithm of Fotakis et al. [6], which show that its theoretical analysis is tight.
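To convey the flavour of saturation-based colouring of graph squares, the sketch below builds the square of a graph and colours it with classic DSATUR (always colour the vertex whose neighbours already use the most distinct colours). This illustrates the family of algorithms MDsatur belongs to; it is not the paper's MDsatur itself, and the example graph is ours.

```python
import itertools

def square(adj):
    """Adjacency of G^2: vertices within distance two become adjacent."""
    sq = {v: set(ns) for v, ns in adj.items()}
    for v, ns in adj.items():
        for u, w in itertools.combinations(ns, 2):
            sq[u].add(w)
            sq[w].add(u)
    return sq

def dsatur(adj):
    """Classic DSATUR: repeatedly colour the uncoloured vertex with the
    highest saturation degree (distinct colours among its neighbours),
    breaking ties by degree."""
    colour = {}
    while len(colour) < len(adj):
        v = max((u for u in adj if u not in colour),
                key=lambda u: (len({colour[w] for w in adj[u] if w in colour}),
                               len(adj[u])))
        used = {colour[w] for w in adj[v] if w in colour}
        colour[v] = next(c for c in itertools.count() if c not in used)
    return colour

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # K_{1,3}; its square is K_4
print(max(dsatur(square(star)).values()) + 1)    # 4 colours needed
```

The star example already shows why squares are harder to colour than the underlying planar graph: a vertex of degree d forces a clique of size d + 1 in the square.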

Abstract: In this paper, we consider the problem of energy balanced data propagation in wireless sensor networks and we generalise previous works by allowing realistic energy assignment. A new modelling of the process of energy consumption as a random walk, along with a new analysis, is proposed. Two new algorithms are presented and analysed. The first one is easy to implement and fast to execute. However, it needs a priori assumptions on the process generating data to be propagated. The second algorithm overcomes this need by inferring information from the observation of the process. Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes. This represents an important contribution to energy balanced data propagation in wireless sensor networks due to their highly dynamic nature.
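The energy-balance trade-off at the heart of such schemes can be illustrated with a toy slice model: a node i hops from the sink either relays a message hop-by-hop (cheap, but loads every slice closer to the sink) or sends it directly to the sink (expensive, cost growing like i^2, but sparing the inner slices). The probabilities, costs and layout below are our illustrative assumptions, not the paper's model.

```python
import random

def simulate(n_slices=10, events=20000, p_direct=0.25, seed=1):
    """Energy-balance sketch: a node in slice i (i hops from the sink)
    either relays one hop (cost 1 per traversed slice) or transmits
    straight to the sink at cost i**2, with probability p_direct.
    Returns total energy spent per slice."""
    rng = random.Random(seed)
    energy = [0.0] * (n_slices + 1)
    for _ in range(events):
        i = rng.randint(1, n_slices)        # event sensed in slice i
        while i > 0:
            if rng.random() < p_direct:
                energy[i] += i * i           # long-range hop to the sink
                i = 0
            else:
                energy[i] += 1.0             # hop towards slice i-1
                i -= 1
    return energy[1:]
```

Tuning p_direct per slice (which is, roughly, what an energy-balancing algorithm infers online) trades the relaying burden of the slices near the sink against the expensive direct transmissions of the outer slices.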

Abstract: We introduce a new modelling assumption in wireless sensor networks, that of node redeployment (addition of sensor devices during the protocol evolution) and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks and adaptation becomes a powerful technique for protocol design. Under this model, we design, implement and evaluate a power conservation scheme for efficient data propagation. Our protocol is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards best operation choices. Our protocol does not require exchange of control messages between nodes to coordinate. Implementing our protocol we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of ns-2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-off between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation paradigm) and good trade-offs. Furthermore, redeployment of sensors during network evolution and/or heterogeneous deployment of sensors drastically improve (when compared to "equal total power" simultaneous deployment of identical sensors at the start) the protocol performance (the success rate increases up to four times while reducing energy dissipation and, interestingly, keeping latency low).
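A local sleep-awake adjustment of the kind described, driven only by locally observed density and remaining energy, with no control messages, might look as follows. The target density, floor and linear scaling are our illustrative assumptions, not the paper's rule.

```python
def awake_probability(local_density, energy_ratio,
                      target_density=4.0, floor=0.1):
    """Adaptive sleep-awake sketch: a node stays awake more often when
    few active neighbours are observed (low local density) and scales
    its duty cycle down with its remaining energy fraction.  Each node
    evaluates this locally, so no coordination messages are needed."""
    need = min(1.0, target_density / max(local_density, 1e-9))
    return max(floor, need * energy_ratio)

# Dense neighbourhood, full battery: mostly sleep.
print(round(awake_probability(16, 1.0), 3))   # 0.25
# Sparse neighbourhood, half battery: stay awake half the time.
print(round(awake_probability(2, 0.5), 3))    # 0.5
```

Redeployed or heterogeneous nodes change the locally observed density, so the same rule automatically shifts the sleeping burden onto whichever devices currently have neighbours to spare.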

Abstract: In this paper, a novel configuration is proposed for
the implementation of an almost all-optical switch architecture
called the scheduling switch, which when combined with appropriate
wait-for-reservation or tell-and-go connection and flow
control protocols provides lossless communication for traffic
that satisfies certain smoothness properties. An all-optical 2 × 2
exchange/bypass (E/B) switch based on the nonlinear operation
of a semiconductor optical amplifier (SOA) is considered as the
basic building block of the scheduling switch as opposed to active
SOA-based space switches that use injection current to switch
between ON and OFF states. The experimental demonstration of
the optically addressable 2 × 2 E/B switch, which is summarized for
10-Gb/s data packets as well as synchronous digital hierarchy
(SDH)/STM-64 data frames, ensures the feasibility of the proposed
configuration at high speeds, with low switching energy and low
losses during the scheduling process. In addition, it provides
reduction of the number of required components for the construction
of the scheduling switch, which is calculated to be 50% in the
number of active elements and 33% in the fiber length.

Abstract: This article presents a novel crawling and clustering method for extracting and processing cultural data from the web in a fully automated fashion. Our architecture relies upon a focused web crawler to download web documents relevant to culture. The focused crawler is a web crawler that searches and processes only those web pages that are relevant to a particular topic. After downloading the pages, we extract from each document a number of words for each thematic cultural area, filtering out the documents with non-cultural content; we then create multidimensional document vectors comprising the most frequent cultural term occurrences. We calculate the dissimilarity between the cultural-related document vectors and, for each cultural theme, we use cluster analysis to partition the documents into a number of clusters. Our approach is validated via a proof-of-concept application which analyzes hundreds of web pages spanning different cultural thematic areas.

Abstract: We investigate basic communication protocols in ad-hoc mobile networks. We follow the semi-compulsory approach, according to which a small part of the mobile users, the support, moves in a predetermined way and is used as an intermediate pool for receiving and delivering messages. Under this approach, we present a new semi-compulsory protocol called the runners, in which the members of the support perform concurrent and continuous random walks and exchange any information given to them by senders when they meet. We also conduct a comparative experimental study of the runners protocol with another existing semi-compulsory protocol, called the snake, in which the members of the support move in a coordinated way and always remain pairwise adjacent. The experimental evaluation has been carried out in a new generic framework that we developed to implement protocols for mobile computing. Our experiments showed that for both protocols only a small support is required for efficient communication, and that the runners protocol outperforms the snake protocol in almost all types of inputs we considered.

Abstract: A vertical perspective, ranging from management and routing to physical layer options, concerning dynamic network monitoring and compensation of impairments (M&C) is given. Feasibility, reliability, and performance improvements in reconfigurable transparent networks are expected to arise from the consolidated assessment of network management and control specifications, as well as from a more accurate evaluation of available M&C techniques. In the network layer, algorithms aware of physical parameters are foreseen to pursue reliable network performance. In the physical layer, some new M&C methods were developed, and a rating of the state of the art reported in the literature is given. Optical monitoring implementation and viability are discussed.

Abstract: Raising awareness among young people and changing their behaviour and habits concerning energy usage is key to achieving sustained energy saving. Additionally, young people are very sensitive to environmental protection, so raising awareness among children is much easier than with any other group of citizens. This work examines ways to create an innovative Information & Communication Technologies (ICT) ecosystem (including web-based, mobile, social and sensing elements) tailored specifically for school environments, taking into account both the users (faculty, staff, students, parents) and school buildings, thus motivating and supporting young citizens' behavioural change to achieve greater energy efficiency. A mixture of open-source IoT hardware and proprietary platforms is currently being utilized on the infrastructure level for monitoring a fleet of 18 educational buildings across 3 countries, comprising over 700 IoT monitoring points. Presented here is the system's high-level architecture, as well as several aspects of its implementation, related to the application domain of educational building monitoring and energy efficiency. The system is developed based on open-source technologies and services in order to make it capable of providing an open IT infrastructure and support for different commercial hardware/sensor vendors as well as open-source solutions. The system presented can be used to develop and offer new app-based solutions that can be used either for educational purposes or for managing the energy efficiency of the building. The system is replicable and adaptable to settings that may be different than the scenarios envisioned here (e.g., targeting different climate zones) and different IT infrastructures, and it can be easily extended to accommodate integration with other systems. The overall performance of the system is evaluated in a real-world environment in terms of scalability, responsiveness and simplicity.

Abstract: We study the following Constrained Bipartite Edge Coloring problem: we are given a bipartite graph G=(U,V,E) of maximum degree Δ with n vertices, in which some of the edges have been legally colored with c colors. We wish to complete the coloring of the edges of G minimizing the total number of colors used. The problem has been proved to be NP-hard even for bipartite graphs of maximum degree three. Two special cases of the problem have been previously considered, and tight upper and lower bounds on the optimal number of colors were proved. The upper bounds led to 3/2-approximation algorithms for both problems. In this paper we present a randomized (1.37+o(1))-approximation algorithm for the general problem in the case where max{Δ, c} = ω(ln n). Our techniques are motivated by recent works on the Circular Arc Coloring problem and are essentially different from and simpler than the existing ones.

Abstract: Motivated by the wavelength assignment problem in WDM optical networks, we study path coloring problems in graphs. Given a set of paths P on a graph G, the path coloring problem is to color the paths of P so that no two paths traversing the same edge of G are assigned the same color and the total number of colors used is minimized. The problem has been proved to be NP-hard even for trees and rings.
Using optimal solutions to fractional path coloring, a natural relaxation of path coloring, on which we apply a randomized rounding technique combined with existing coloring algorithms, we obtain new upper bounds on the minimum number of colors sufficient to color any set of paths on any graph. The upper bounds are either existential or constructive.
The existential upper bounds significantly improve existing ones provided that the cost of the optimal fractional path coloring is sufficiently large and the dilation of the set of paths is small. Our algorithmic results include improved approximation algorithms for path coloring in rings and in bidirected trees. Our results extend to variations of the original path coloring problem arising in multifiber WDM optical networks.
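
In the special case where the underlying graph is a chain, path coloring reduces to interval graph coloring, for which a first-fit sweep over paths sorted by left endpoint is optimal, using exactly as many colors as the maximum edge load. A minimal Python sketch of that special case (the function name and demo instance are illustrative, not taken from the paper):

```python
def color_paths_on_chain(paths):
    """First-fit coloring of paths on a chain network.

    Each path (a, b) occupies edges a, ..., b-1; two paths conflict
    iff they share an edge, i.e. max(a1, a2) < min(b1, b2). Scanning
    paths by left endpoint, first-fit uses exactly L colors, where L
    is the maximum number of paths crossing any single edge."""
    order = sorted(range(len(paths)), key=lambda i: paths[i])
    colors = [None] * len(paths)
    for i in order:
        a, b = paths[i]
        # colors already taken by paths sharing an edge with (a, b)
        used = {colors[j] for j in range(len(paths))
                if colors[j] is not None
                and max(a, paths[j][0]) < min(b, paths[j][1])}
        c = 0
        while c in used:
            c += 1
        colors[i] = c
    return colors

# Five paths on a chain; the most loaded edge carries 3 paths,
# so exactly 3 colors are used.
demo = [(0, 3), (1, 4), (3, 5), (2, 6), (4, 7)]
coloring = color_paths_on_chain(demo)
```

On general graphs (trees, rings) the problem becomes NP-hard, which is what motivates the fractional relaxation and randomized rounding discussed in the abstract.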

Abstract: We consider the Moran process, as generalized by Lieberman, Hauert and Nowak (Nature, 433:312--316, 2005). A population resides on the vertices of a finite, connected, undirected graph and, at each time step, an individual is chosen at random with probability proportional to its assigned 'fitness' value. It reproduces, placing a copy of itself on a neighbouring vertex chosen uniformly at random, replacing the individual that was there. The initial population consists of a single mutant of fitness r>0 placed uniformly at random, with every other vertex occupied by an individual of fitness 1. The main quantities of interest are the probabilities that the descendants of the initial mutant come to occupy the whole graph (fixation) and that they die out (extinction); almost surely, these are the only possibilities. In general, exact computation of these quantities by standard Markov chain techniques requires solving a system of linear equations of size exponential in the order of the graph, so it is not feasible. We show that, with high probability, the number of steps needed to reach fixation or extinction is bounded by a polynomial in the number of vertices in the graph. This bound allows us to construct fully polynomial randomized approximation schemes (FPRAS) for the probability of fixation (when r≥1) and of extinction (for all r>0).
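
The process described above is straightforward to simulate directly, which yields a naive Monte Carlo estimator of the fixation probability; the FPRAS in the paper is more refined, but a simulation sketch (function name and parameters are illustrative) makes the dynamics concrete:

```python
import random

def moran_fixation(adj, r, trials=2000, rng=random.Random(1)):
    """Monte Carlo estimate of the fixation probability of a single
    mutant of fitness r in the Moran process on an undirected graph.
    adj: adjacency list {vertex: [neighbours]} with vertices 0..n-1."""
    n = len(adj)
    fixations = 0
    for _ in range(trials):
        mutants = {rng.randrange(n)}          # initial mutant, uniform
        while 0 < len(mutants) < n:
            # pick a reproducer with probability proportional to fitness
            total = r * len(mutants) + (n - len(mutants))
            if rng.uniform(0, total) < r * len(mutants):
                v = rng.choice(sorted(mutants))
            else:
                v = rng.choice([u for u in adj if u not in mutants])
            # the offspring replaces a uniformly random neighbour
            w = rng.choice(adj[v])
            if v in mutants:
                mutants.add(w)
            else:
                mutants.discard(w)
        fixations += (len(mutants) == n)
    return fixations / trials

# On the complete graph K_n the fixation probability is known to be
# (1 - 1/r) / (1 - 1/r^n); for n = 4, r = 2 that is about 0.533.
K4 = {v: [u for u in range(4) if u != v] for v in range(4)}
est = moran_fixation(K4, r=2.0)
```

Note that each trial runs until absorption, which the paper shows takes polynomially many steps with high probability; that bound is exactly what makes this kind of direct simulation viable.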

Abstract: The study of the path coloring problem is motivated by the allocation of optical bandwidth to communication requests in all-optical networks that utilize Wavelength Division Multiplexing (WDM). WDM technology establishes communication between pairs of network nodes by establishing transmitter-receiver paths and assigning wavelengths to each path so that no two paths going through the same fiber link use the same wavelength. Optical bandwidth is the number of distinct wavelengths. Since state-of-the-art technology allows for a limited number of wavelengths, the engineering problem to be solved is to establish communication minimizing the total number of wavelengths used. This is known as the wavelength routing problem. In the case where the underlying network is a tree, it is equivalent to the path coloring problem.
We survey recent advances on the path coloring problem in both undirected and bidirected trees. We present hardness results and lower bounds for the general problem covering also the special case of sets of symmetric paths (corresponding to the important case of symmetric communication). We give an overview of the main ideas of deterministic greedy algorithms and point out their limitations. For bidirected trees, we present recent results about the use of randomization for path coloring and outline approximation algorithms that find path colorings by exploiting fractional path colorings. Also, we discuss upper and lower bounds on the performance of on-line algorithms.

Abstract: We examine the problem of assigning n independent jobs to m unrelated parallel machines, so that each job is processed without interruption on one of the machines, and at any time, every machine processes at most one job. We focus on the case where m is a fixed constant, and present a new rounding approach that yields approximation schemes for multi-objective minimum makespan scheduling with a fixed number of linear cost constraints. The same approach gives approximation schemes for covering problems like maximizing the minimum load on any machine, and for assigning specific or equal loads to the machines.
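
To make the objective concrete, here is a brute-force solver for tiny instances of unrelated-machine makespan minimization (exponential in the number of jobs, so purely illustrative; the paper's approximation schemes rely on rounding rather than enumeration, and the instance below is invented):

```python
from itertools import product

def min_makespan(p):
    """Exact minimum makespan on unrelated parallel machines.
    p[i][j] = processing time of job j on machine i.  Enumerates all
    m^n assignments of the n jobs to the m machines."""
    m, n = len(p), len(p[0])
    best = float("inf")
    for assign in product(range(m), repeat=n):
        loads = [0] * m
        for j, i in enumerate(assign):
            loads[i] += p[i][j]
        best = min(best, max(loads))   # makespan = maximum machine load
    return best

# 2 machines, 3 jobs; machine 0 is fast on job 0, machine 1 on jobs 1, 2.
times = [[1, 5, 4],
         [6, 2, 2]]
# Optimal: job 0 on machine 0 (load 1), jobs 1, 2 on machine 1 (load 4).
```

The multi-objective variants in the abstract add linear cost constraints over the same assignment variables; the feasible set is unchanged, only the objective bookkeeping grows.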

Abstract: We present a 40 Gb/s asynchronous self-routing network and node architecture that exploits bit
and packet level optical signal processing to perform synchronization, forwarding and
switching. Optical packets are self-routed on a hop-by-hop basis through the network by using
stacked optical tags, each representing a specific optical node. Each tag contains control signals
for configuring the switching matrix and forwarding each packet to the appropriate outgoing
link and onto the next hop. Physical layer simulations are performed, modeling each optical subsystem
of the node, showing acceptable signal quality and bit error rates. Resource-reservation-based
signaling algorithms are theoretically modeled for the control plane, capable of providing
high performance in terms of blocking probability and holding time.

Abstract: A new model for intrusion and its propagation through various attack
schemes in networks is considered. The model is characterized by the number of
network nodes n, and two parameters f and g. Parameter f represents the probability
of failure of an attack to a node and is a gross measure of the level of security of
the attacked system and perhaps of the intruder's skills; g represents a limit on
the number of attacks that the intrusion software can ever try, due to the danger
of being discovered, when it issues them from a particular (broken) network node.
The success of the attack scheme is characterized by two factors: the number of
nodes captured (the spread factor) and the number of virtual links that a defense
mechanism has to trace from any node where the attack is active to the origin of
the intrusion (the traceability factor). The goal of an intruder is to maximize both
factors. In our model we present four different ways (attack schemes) by which an
intruder can organize his attacks. Using analytic and experimental methods, we first
show that for any 0 < f < 1, there exists a constant g for which any of our attack
schemes can achieve a Θ(n) spread and traceability factor with high probability,
given sufficient propagation time. We also show for three of our attack schemes
that the spread and the traceability factors are, with high probability, linearly related
during the whole duration of the attack propagation. This implies that it will not be
easy for a detection mechanism to trace the origin of the intrusion, since it will have
to trace a number of links proportional to the nodes captured.

Abstract: In this paper, we present BAD, an application-level multicast infrastructure. BAD is designed to improve the performance of multicast dissemination trees, under both a static and a dynamic environment, where the effective bandwidth of the network links changes with time. Its main goal is to improve the data rate that end users perceive during a multicast operation. BAD can be used for the creation and management of multicast groups. It can be deployed over any DHT while retaining its fundamental advantage of bandwidth improvement. BAD consists of a suite of algorithms for node joins/leaves, bandwidth distribution to heterogeneous nodes, tree rearrangement and reduction of overhead. We have implemented BAD within the FreePastry system. We report on the results of a detailed performance evaluation which testifies to BAD's efficiency and low overhead. Specifically, our experiments show that the improvement on the minimum bandwidth ranges from 40% to 1400% and the improvement on the average bandwidth ranges from 60% to 250%.

Abstract: Collecting sensory data using a mobile data sink has been shown to drastically reduce energy consumption at the cost of increased delivery delay. Towards improved energy-latency trade-offs, we propose a biased, adaptive sink mobility scheme that adjusts to local network conditions, such as the surrounding density, the remaining energy and the number of past visits in each network region. The sink moves probabilistically, favoring less visited areas in order to cover the network area faster, while adaptively stopping for more time in network regions that tend to produce more data. We implement and evaluate our mobility scheme via simulation in diverse network settings. Compared to known blind random, non-adaptive schemes, our method achieves significantly reduced latency, especially in networks with nonuniform sensor distribution, without compromising energy efficiency and delivery success.
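
A minimal sketch of the kind of biased walk described above, on a hypothetical 1-D chain of network regions (the weighting 1/(1+visits) and the dwell rule are illustrative assumptions, not the paper's exact scheme):

```python
import random

def biased_walk(rates, steps=500, rng=random.Random(0)):
    """Biased sink walk on a 1-D chain of network regions.
    The sink favours less-visited neighbouring regions (weight
    1/(1+visits)) and dwells longer where the data rate is higher.
    rates[i] is an assumed per-region data-generation rate."""
    n = len(rates)
    visits = [0] * n
    dwell = [0] * n
    pos = 0
    for _ in range(steps):
        visits[pos] += 1
        dwell[pos] += 1 + rates[pos]          # adaptive stop duration
        nbrs = [q for q in (pos - 1, pos + 1) if 0 <= q < n]
        weights = [1.0 / (1 + visits[q]) for q in nbrs]
        pos = rng.choices(nbrs, weights=weights)[0]
    return visits, dwell

# Region 2 produces far more data than the others, so the sink
# covers all regions but accumulates most of its dwell time there.
visits, dwell = biased_walk([0, 0, 5, 0, 0])
```

The anti-revisit bias is what accelerates coverage relative to a blind random walk, while the rate-dependent dwell term captures the "stay longer where more data is produced" behaviour.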

Abstract: In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.

Abstract: The Team Orienteering Problem with Time Windows (TOPTW) deals with deriving a number of tours comprising a subset of candidate nodes (each associated with a "profit" value and a visiting time window) so as to maximize the overall "profit", while respecting a specified time span. TOPTW has been used as a reference model for the Tourist Trip Design Problem (TTDP) in order to derive near-optimal multiple-day tours for tourists visiting a destination featuring several points of interest (POIs), taking into account a multitude of POI attributes. TOPTW is an NP-hard problem and the most efficient known heuristic is based on Iterated Local Search (ILS). However, ILS treats each POI separately; hence it tends to overlook highly profitable areas of POIs situated far from the current location, considering them too time-expensive to visit. We propose two cluster-based extensions to ILS addressing the aforementioned weakness by grouping POIs into disjoint clusters (based on geographical criteria), thereby making visits to such POIs more attractive. Our approaches improve on ILS with respect to solution quality, while executing in comparable time and reducing the frequency of overly long transfers among POIs.

Abstract: Divisible load scenarios occur in modern media server applications since most multimedia applications typically require access to continuous and discrete data. A high performance Continuous Media (CM) server greatly depends on the ability of its disk IO subsystem to serve both types of workloads efficiently. Disk scheduling algorithms for mixed media workloads, although they play a central role in this task, have been overlooked by related research efforts. These algorithms must satisfy several stringent performance goals, such as achieving low response time and ensuring fairness, for the discrete-data workload, while at the same time guaranteeing the uninterrupted delivery of continuous data, for the continuous-data workload. The focus of this paper is on disk scheduling algorithms for mixed media workloads in a multimedia information server. We propose novel algorithms, present a taxonomy of relevant algorithms, and study their performance through experimentation. Our results show that our algorithms offer drastic improvements in discrete request average response times, are fair, serve continuous requests without interruptions, and that the disk technology trends are such that the expected performance benefits can be even greater in the future.

Abstract: This paper presents ongoing work on using data mining clustering to support the evaluation of software systems' maintainability. As input for our analysis we employ software measurement data extracted from Java source code. We propose a two-step clustering process which facilitates the assessment of a system's maintainability at first, and subsequently an in-cluster analysis in order to study the evolution of each cluster as the system's versions pass by. The process is evaluated on Apache Geronimo, a J2EE 1.4 open source Application Server. The evaluation involves analyzing several versions of this software system in order to assess its evolution and maintainability over time. The paper concludes with directions for future work.

Abstract: An intersection graph of n vertices assumes that each vertex is equipped with a subset of a global label set. Two vertices share an edge when their label sets intersect. Random Intersection Graphs (RIGs) (as defined in [18, 31]) consider label sets formed by the following experiment: each vertex, independently and uniformly, examines all the labels (m in total) one by one, and each examination independently succeeds in putting the label in the vertex's set with probability p. Such graphs nicely capture interactions in networks due to sharing of resources among nodes. We study here the problem of efficiently coloring (and of finding upper bounds to the chromatic number of) RIGs. We concentrate on a range of parameters not examined in the literature, namely: (a) m = n^α for α less than 1 (in this range, RIGs differ substantially from the Erdős-Rényi random graphs) and (b) the selection probability p is quite high (e.g. at least ln²n/m in our algorithm), which disallows direct greedy colouring methods. We manage to get the following results:
- For the case mp ≥ ln n, for any constant less than 1 − α, we prove that np colours are enough to colour most of the vertices of the graph with high probability (whp). This means that even for quite dense graphs, using the same number of colours as those needed to properly colour the clique induced by any label suffices to colour almost all of the vertices of the graph. Note also that this range of values of m, p is quite wider than the one studied in [4].
- We propose and analyze an algorithm CliqueColour for finding a proper colouring of a random instance of G_{n,m,p}, for any mp ≥ ln²n. The algorithm uses information of the label sets assigned to the vertices of G_{n,m,p} and runs in O(n²mp²/ln n) time, which is polynomial in n and m. We also show, by a reduction to the uniform random intersection graphs model, that the number of colours required by the algorithm is of the correct order of magnitude with respect to the actual chromatic number of G_{n,m,p}.
- We finally compare the problem of finding a proper colouring for G_{n,m,p} to that of colouring hypergraphs so that no edge is monochromatic. We show how one can find in polynomial time a k-colouring of the vertices of G_{n,m,p}, for any integer k, such that no clique induced by a single label in G_{n,m,p} is monochromatic. Our techniques are novel and try to exploit as much as possible the hidden structure of random intersection graphs in this interesting range.
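
The G_{n,m,p} model described above is easy to sample, and a plain first-fit colouring over the sampled label sets illustrates the object being coloured (this is not the paper's CliqueColour algorithm; function names and parameter values are illustrative):

```python
import random

def random_intersection_graph(n, m, p, rng=random.Random(42)):
    """Sample G_{n,m,p}: each of n vertices independently keeps each
    of m labels with probability p; two vertices are adjacent iff
    their label sets intersect.  Returns (label_sets, edge_set)."""
    labels = [frozenset(l for l in range(m) if rng.random() < p)
              for _ in range(n)]
    edges = {(u, v) for u in range(n) for v in range(u + 1, n)
             if labels[u] & labels[v]}
    return labels, edges

def greedy_colour_by_labels(labels):
    """First-fit proper colouring driven directly by the label sets:
    a vertex conflicts with every earlier vertex sharing a label,
    since all holders of a common label form a clique."""
    colours = {}
    for v, ls in enumerate(labels):
        used = {colours[u] for u in colours if labels[u] & ls}
        c = 0
        while c in used:
            c += 1
        colours[v] = c
    return colours

labels, edges = random_intersection_graph(50, 20, 0.15)
colours = greedy_colour_by_labels(labels)
```

First-fit like this can waste colours in the dense regime the paper targets; the point of CliqueColour is to exploit the label structure to get within the right order of magnitude of the chromatic number.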

Abstract: We investigate random intersection graphs, a combinatorial model that quite accurately abstracts distributed networks with local interactions between nodes blindly sharing critical resources from a limited globally available domain. We study important combinatorial properties (independence and hamiltonicity) of such graphs. These properties relate crucially to algorithmic design for important problems (like secure communication and frequency assignment) in distributed networks characterized by dense, local interactions and resource limitations, such as sensor networks. In particular, we prove that, interestingly, a small constant number of random, resource selections suffices to make the graph hamiltonian and we provide tight evaluations of the independence number of these graphs.

Abstract: Clustering is an important research topic for wireless sensor networks (WSNs). A large variety of approaches has been presented focusing on different performance metrics. Even though all of them have many practical applications, an extremely limited number of software implementations is available to the research community. Furthermore, these very few techniques are implemented for specific WSN systems or are integrated in complex applications. Thus it is very difficult to comparatively study their performance and almost impossible to reuse them in future applications under a different scope. In this work we study a large body of well-established algorithms. We identify their main building blocks and propose a component-based architecture for developing clustering algorithms that (a) promotes exchangeability of algorithms, thus enabling the fast prototyping of new approaches, (b) allows cross-layer implementations to realize complex applications, (c) offers a common platform to comparatively study the performance of different approaches, and (d) is hardware and OS independent. We implement 5 well-known algorithms and discuss how to implement 11 more. We conduct an extended simulation study to demonstrate the faithfulness of our implementations when compared to the original implementations. Our simulations are at very large scale, thus also demonstrating the scalability of the original algorithms beyond their original presentations. We also conduct experiments to assess their practicality in real WSNs. We demonstrate how the implemented clustering algorithms can be combined with routing and group key establishment algorithms to construct WSN applications. Our study clearly demonstrates the applicability of our approach and the benefits it offers to both research & development communities.

Abstract: We survey here some recent computational models for networks of tiny artifacts. In particular, we focus on networks consisting of artifacts with sensing capabilities. We first imagine the artifacts moving passively, that is, being mobile but unable to control their own movement. This leads us to the population protocol model of Angluin et al. (2004) [16]. We survey this model and some of its recent enhancements. In particular, we also present the mediated population protocol model, in which the interaction links are capable of storing states, and the passively mobile machines model, in which the finite-state nature of the agents is relaxed and the agents become multitape Turing machines that use a restricted space. We next survey the sensor field model, a general model capturing some identifying characteristics of many sensor network settings. A sensor field is composed of various kinds of devices that can communicate with one another and also with the environment through input/output data streams. We, finally, present simulation results between sensor fields and population protocols and analyze the capability of their variants to decide graph properties.

Abstract: Here we survey various computational models for Wireless Sensor Networks (WSNs). The population protocol model (PP) considers networks of tiny mobile finite-state artifacts that can sense the environment and communicate in pairs to perform a computation. The mediated population protocol model (MPP) enhances the previous model by allowing the communication links to have a constant size buffer, providing more computational power. The graph decision MPP model (GDM) is a special case of MPP that focuses on the MPP's ability to decide graph properties of the network. Another direction towards enhancing the PP is followed by the PALOMA model in which the artifacts are no longer finite-state automata but Turing Machines of logarithmic memory in the population size. A different approach to modeling WSNs is the static synchronous sensor field model (SSSF) which describes devices communicating through a fixed communication graph and interacting with their environment via input and output data streams. In this survey, we present the computational capabilities of each model and provide directions for further research.

Abstract: In this chapter, our focus is on computational network analysis from a theoretical point of view. In particular, we study the \emph{propagation of influence and computation in dynamic distributed computing systems}. We focus on a \emph{synchronous message passing} communication model with bidirectional links. Our network dynamicity assumption is a \emph{worst-case dynamicity} controlled by an adversary scheduler, which has received much attention recently. We first study the fundamental \emph{naming} and \emph{counting} problems (and some variations) in
networks that are \emph{anonymous}, \emph{unknown}, and possibly dynamic. Network dynamicity is modeled here by the \emph{1-interval connectivity model}, in which communication is synchronous and a (worst-case) adversary
chooses the edges of every round subject to the condition that each instance is connected. We then replace this quite strong assumption by minimal \emph{temporal connectivity} conditions. These conditions only require that \emph{another causal influence occurs within every time-window of some given length}. Based on this basic idea we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate \emph{termination criteria} in networks in which an upper bound on any of these metrics is known. We exploit these termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental \emph{counting} and \emph{all-to-all token dissemination} (or \emph{gossip}) problems. Finally, we propose another model of worst-case temporal connectivity, called \emph{local
communication windows}, that assumes a fixed underlying communication network and restricts the adversary to allow communication between local neighborhoods in every time-window of some fixed length. We prove some basic properties and provide a protocol for counting in this model.

Abstract: We study the partially eponymous model of distributed computation, which simultaneously
generalizes the anonymous and the eponymous models. In this model, processors have
identities, which are neither necessarily all identical (as in the anonymous model) nor
necessarily unique (as in the eponymous model). In a decision problem formalized as a
relation, processors receive inputs and seek to reach outputs respecting the relation. We
focus on the partially eponymous ring, and we shall consider the computation of circularly
symmetric relations on it. We consider sets of rings where all rings in the set have the same
multiset of identity multiplicities.
We distinguish between solvability and computability: in solvability, processors are
required to always reach outputs respecting the relation; in computability, they must
do so whenever this is possible, and must otherwise report impossibility.
We present a topological characterization of solvability for a relation on a set of rings,
which can be expressed as an efficiently checkable, number-theoretic predicate.
We present a universal distributed algorithm for computing a relation on a set of
rings; it runs any distributed algorithm for constructing views, followed by local steps.
We derive, as our main result, a universal upper bound on the message complexity to
compute a relation on a set of rings; this bound demonstrates a graceful degradation
with the Least Minimum Base, a parameter indicating the degree of least possible
eponymity for a set of rings. Thereafter, we identify two cases where a relation can be
computed on a set of rings, with rings of size n, with an efficient number of O(n lg n)
messages.

Abstract: In this article, we present a detailed performance
evaluation of a hybrid optical switching (HOS)
architecture called Overspill Routing in Optical Networks
(ORION). The ORION architecture combines
(optical) wavelength and (electronic) packet switching,
so as to obtain the individual advantages of both switching
paradigms. In particular, ORION exploits the idle periods of established lightpaths to transmit packets destined to the next common node, or even directly to their common end-destination. Depending on whether all lightpaths are allowed to simultaneously carry and terminate overspill traffic, or overspill is restricted to a subset of wavelengths, we speak of unconstrained or constrained ORION, respectively. To evaluate both cases, we developed an extensive network simulator in which the basic features of the ORION architecture were modeled, including suitable edge/core node switches and load-varying sources to simulate overloading traffic conditions. Further, we have assessed various aspects of the ORION architecture, including two basic routing/forwarding policies and various buffering schemes. The complete network study shows that ORION can absorb temporal traffic overloads, as intended, provided sufficient buffering is present. We also demonstrate that the restriction of simultaneous packet insertions/extractions, introduced to reduce the necessary interfaces, does not deteriorate performance, and thus the use of traffic concentrators assures ORION's economic viability.

Abstract: Ever-increasing bandwidth demands and higher flexibility are the main challenges for next generation optical core networks. A new trend to address these challenges is to consider the impairments of the lightpaths during the design of optical networks. In our work, we focus on translucent optical networks, where some lightpaths are routed transparently, whereas others go through a number of regenerators. We present a cost analysis of design strategies which are based either on an exact Quality of Transmission (QoT) validation or on a relaxed one, and attempt to reduce the number of regenerators used. In the exact design strategy, regenerators are required if the QoT of a candidate lightpath is below a predefined threshold, assuming empty network conditions. In the relaxed strategy, this predefined threshold is lower, while it is assumed that the network is fully loaded. We evaluate the suggested design solutions techno-economically and also show that adding more flexibility to the optical nodes has a large impact on the total infrastructure cost.

Abstract: We propose a simple and intuitive cost mechanism which assigns costs for the competitive
usage of m resources by n selfish agents. Each agent has an individual demand; demands are
drawn according to some probability distribution. The cost paid by an agent for a resource
it chooses is the total demand put on the resource divided by the number of agents who
chose that same resource. So, resources charge costs in an equitable, fair way, while each
resource makes no profit out of the agents.
We call our model the Fair Pricing model. Its fair cost mechanism induces a noncooperative
game among the agents. To evaluate the Nash equilibria of this game, we
introduce the Diffuse Price of Anarchy, as an extension of the Price of Anarchy that takes
into account the probability distribution on the demands. We prove:
- Pure Nash equilibria may not exist, unless all chosen demands are identical.
- A fully mixed Nash equilibrium exists for all possible choices of the demands. Further on, the fully mixed Nash equilibrium is the unique Nash equilibrium in case there are only two agents.
- In the worst-case choice of demands, the Price of Anarchy is Θ(n); for the special case of two agents, the Price of Anarchy is less than 2 − 1/m.
- Assume now that demands are drawn from a bounded, independent probability distribution, where all demands are identically distributed, and each demand may not exceed some (universal for the class) constant times its expectation. It happens that the constant is just 2 when each demand is distributed symmetrically around its expectation. We prove that, for asymptotically large games where the number of agents tends to infinity, the Diffuse Price of Anarchy is at most that universal constant. This implies the first separation between Price of Anarchy and Diffuse Price of Anarchy.
Towards the end, we consider two closely related cost sharing models, namely the Average
Cost Pricing and the Serial Cost Sharing models, inspired by Economic Theory. In contrast
to the Fair Pricing model, we prove that pure Nash equilibria do exist for both these models.
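
The fair cost mechanism described above can be sketched directly in code. The function below (an illustration with hypothetical names, not part of the paper) charges each agent the total demand put on its chosen resource divided by the number of agents that chose that resource, so the payments collected on each resource exactly equal its load and the resources make no profit.

```python
from collections import defaultdict

def fair_pricing_costs(demands, choices):
    """Cost charged to each agent under the Fair Pricing mechanism:
    total demand on the chosen resource, divided by the number of
    agents that chose that same resource."""
    load = defaultdict(float)   # total demand per resource
    count = defaultdict(int)    # number of agents per resource
    for agent, resource in enumerate(choices):
        load[resource] += demands[agent]
        count[resource] += 1
    return [load[r] / count[r] for r in choices]
```

For example, with demands [4, 2, 6] and choices [0, 0, 1], agents 0 and 1 share resource 0's load of 6 and pay 3.0 each, while agent 2 pays its full demand of 6.0.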

Abstract: The purpose of the first Student Workshop on Wireless Sensor Networks is to bring together both graduate and undergraduate students working in the area of wireless sensor networks, with a focus on applications, real-world experiments or deployments of wireless sensor networks. Students will have the opportunity to interact with their peers, publicize and get feedback on their work, exchange experiences, make contacts, and learn what other students are doing in the wireless sensor networks area. The workshop will be a day-long program organized in such a way as to promote lively discussions.

Abstract: The Greek School Network (GSN) is the nationwide network that connects all units of primary and secondary education in Greece. GSN offers a significant set of diverse services to more than 15,000 schools and administrative units, and more than 60,000 teachers, placing GSN second in infrastructure size nationwide. GSN has relied on the emerging power of open source software to build cutting-edge services capable of covering internal administrative and monitoring needs, end user demands, and, foremost, modern pedagogical requirements for tools and services. GSN provides a wide set of advanced services, varying from web mail to virtual classrooms and synchronous/asynchronous tele-education. This paper presents an evaluation of GSN open source services based on the opinions of users who use GSN for educational purposes, and on usage and traffic measurement statistics. The paper reaches the conclusion that open source software provides a sound technological platform that meets the needs for cutting edge educational services deployment, and innovative, competitive software production for educational networks.

Abstract: In this technical report we present a throughput performance analysis on a heterogeneous wireless sensor network which consists of four different sensor devices (TelosB, SunSPOT, Arduino, iSense), and the results of a set of experiments during which we gathered LQI/RSSI measurements.

Abstract: Evaluating target tracking protocols for wireless sensor networks that can localize multiple mobile devices can be a very challenging task. Such protocols usually aim at minimizing communication overhead and data processing for the participating nodes, as well as delivering adequate tracking information of the mobile targets in a timely manner. Simulations of such protocols are performed using theoretical models that are based on unrealistic assumptions, like the unit disk graph communication model, ideal network localization and perfect distance estimations. With these assumptions taken for granted, theoretical models claim various performance milestones that cannot be achieved under realistic conditions. In this paper we design a new localization protocol in which mobile assets can be tracked passively via software agents. We address the issues that hinder its performance under real environment conditions and provide a deployable protocol. The implementation, integration and experimentation of this new protocol and its optimizations were performed using the WISEBED framework. We apply our protocol in multiple indoor wireless sensor testbeds with multiple experimental scenarios to showcase scalability and trade-offs between network properties and configurable protocol parameters. By analyzing the real-world experimental output, we present results that depict a more realistic view of the target tracking problem regarding power consumption and the quality of tracking information. Finally, we also conduct focused simulations to assess the scalability of our protocol in very large networks with multiple mobile assets.

Abstract: We have designed and implemented a platform that enables monitoring and actuation in multiple buildings, which has been utilised in the context of a research project in Greece focusing on public school buildings. The Green Mindset project has installed IoT devices in 12 Greek public schools to monitor energy consumption, along with indoor and outdoor environmental parameters. We present the architecture and actual deployment of our system, along with a first set of findings.

Abstract: Wireless sensor networks are comprised of a vast number of ultra-small autonomous computing, communication and sensing devices, with restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Such networks can be very useful in practice, e.g.~in the local monitoring of ambient conditions and reporting them to a control center. In this paper we propose a new lightweight, distributed group key establishment protocol suitable for such energy constrained networks. Our approach essentially trades off complex message exchanges for some amount of additional local computations. The extra computations are simple for the devices to implement and are evenly distributed across the participants of the network, leading to good energy balance. We evaluate the performance of our protocol in comparison to existing group key establishment protocols, both in simulated and real environments. The security of all protocols rests on the intractability of the Diffie-Hellman problem, and we used its elliptic curve analog in our experiments. Our findings indicate the feasibility of implementing our protocol on real sensor network devices and highlight the advantages and disadvantages of each approach, given the available technology and the corresponding efficiency (energy, time) criteria.

Abstract: Wireless Sensor Networks consist of a large number of small, autonomous devices that are able to interact with their environment by sensing, and that collaborate to fulfill their tasks since, usually, a single node is incapable of doing so; they use wireless communication to enable this collaboration. Each device has limited computational and energy resources; thus a basic issue in the applications of wireless sensor networks is low energy consumption and, hence, the maximization of the network lifetime.
The collected data is disseminated to a static control point (the data sink) in the network using node-to-node, multi-hop data propagation. However, this makes sensor devices consume significant amounts of energy and adds implementation complexity, since a routing protocol must be executed. Also, a point of failure emerges in the area near the control center, where nodes relay the data arriving from nodes that are farther away. Recently, a new approach has been developed that shifts the burden from the sensor nodes to the sink. The main idea is that the sink has significant and easily replenishable energy reserves and can move inside the area where the sensor network is deployed, in order to acquire the data collected by the sensor nodes at very low energy cost. However, the need to visit all the regions of the network may result in large delivery delays.
In this work we have developed protocols that control the movement of the sink in wireless sensor networks with non-uniform deployment of the sensor nodes, in order to achieve efficient (with respect to both energy and latency) data collection. More specifically, a graph formation phase is executed by the sink during initialization: the network area is partitioned into equal square regions, where the sink pauses for a certain amount of time during the network traversal in order to collect data.
We propose two network traversal methods, a deterministic and a random one. When the sink moves in a random manner, the selection of the next area to visit is done in a biased random manner depending on the frequency of visits to its neighbor areas; thus, less frequently visited areas are favored. Moreover, our method locally determines the stop time needed to serve each region with respect to some global network resources, such as the initial energy reserves of the nodes and the density of the region, stopping for a greater time interval at regions with higher density, and hence more traffic load. In this way, we achieve accelerated coverage of the network as well as fairness in the service time of each region. Besides randomized mobility, we also propose an optimized deterministic trajectory without visit overlaps, including direct (one-hop) sensor-to-sink data transmissions only.
We evaluate our methods via simulation, in diverse network settings and comparatively to related state of the art solutions. Our findings demonstrate significant latency and energy consumption improvements, compared to previous research.
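
The biased traversal and density-dependent stop time described above can be sketched as follows. The weight 1/(1+visits) and the density-proportional pause rule are illustrative assumptions of this sketch, not the exact rules of the protocols.

```python
import random

def next_region(neighbors, visits, rng):
    """Biased random choice of the next region to visit: less
    frequently visited neighbor regions get proportionally higher
    probability (illustrative weight 1/(1 + visits))."""
    weights = [1.0 / (1 + visits[r]) for r in neighbors]
    return rng.choices(neighbors, weights=weights)[0]

def pause_time(density, base=1.0):
    """Stop longer at denser regions, which carry more traffic load
    (hypothetical proportional rule)."""
    return base * density
```

With visit counts {"A": 0, "B": 9}, region A is roughly ten times more likely to be chosen than B, which accelerates coverage of rarely served areas.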

Abstract: Wireless sensor networks are a recently introduced category of ad hoc computer networks, comprised of nodes of small size and limited computing and energy resources. Such nodes are capable of measuring physical properties such as temperature and humidity, of wireless communication with each other, and in some cases of interaction with their surrounding environment (through the use of electromechanical parts).
As these networks have become widely available (in terms of cost and commercial hardware availability), their field of application and philosophy of use are constantly evolving. There are numerous examples of their applications, ranging from monitoring the biodiversity of a specific outdoor area to structural health monitoring of bridges, and networks ranging from a few tens of nodes to even thousands of nodes.
In this PhD thesis we investigated the following basic research lines related to wireless sensor networks:
a) their simulation,
b) the development of data propagation protocols suited to such networks and their evaluation through simulation,
c) the modelling of ``hostile'' circumstances (obstacles) during their operation and evaluation of their impact through simulation,
d) the development of a sensor network management application.
Regarding simulation, we initially placed an emphasis on issues such as the effective simulation of networks of several thousands of nodes, and in that respect we developed a network simulator (simDust), which is extensible through the addition of new data propagation protocols and visualization capabilities. This simulator was used to evaluate the performance of a number of characteristic data propagation protocols for wireless sensor networks. Furthermore, we developed a new protocol (VRTP) and evaluated its performance against other similar protocols. Our studies show that the new protocol, which uses dynamic changes of the transmission range of the network nodes, performs better in certain cases than other related protocols, especially in networks containing obstacles and in the case of non-homogeneous placement of nodes.
Moreover, we emphasized the addition of ``realistic'' conditions to the simulation of such protocols, conditions that have an adverse effect on their operation. Our goal was to introduce a model for obstacles that adds little computational overhead to a simulator, and also to study the effect of the inclusion of such a model on data propagation protocols that use geographic information (absolute or relative). Such protocols are relatively sensitive to dynamic topology changes and network conditions. Through our experiments, we show that the inclusion of obstacles during simulation can have a significant effect on these protocols.
Finally, regarding applications, we initially proposed an architecture (WebDust/ShareSense) for the management of such networks, providing basic capabilities for managing them and developing applications on top of it. Features that set it apart are the capability of managing multiple heterogeneous sensor networks, openness, and the use of a peer-to-peer architecture for the interconnection of multiple sensor networks. A large part of the proposed architecture was implemented, while the overall architecture was extended to also include additional visualization capabilities.

Abstract: Wireless sensor networks are comprised of a vast number of devices, situated in an area of interest that self organize in a structureless network, in order to monitor/record/measure an environmental variable or phenomenon and subsequently to disseminate the data to the control center.
Here we present research focused on the development, simulation and evaluation of energy efficient algorithms; our basic goal is to minimize energy consumption. Despite technology advances, the problem of optimizing energy use remains relevant, since current and emerging hardware solutions fail to solve it.
We aim to reduce communication cost by introducing novel techniques that facilitate the development of new algorithms. We investigated techniques for distributed adaptation of a protocol's operation using information available locally on every node; thus, through local choices, we improve overall performance. We propose techniques for collecting and exploiting limited local knowledge of the network conditions. In an energy efficient manner, we collect additional information which is used to achieve improvements such as forming energy efficient, low latency and fault tolerant paths to route data. We investigate techniques for managing mobility in networks where movement is a characteristic of the control center as well as of the sensors. We examine methods for traversing and covering the network field based on probabilistic movement that uses local criteria to favor certain areas.
The algorithms we develop based on these techniques operate a) at a low level, managing devices, b) at the routing layer, and c) network-wide, achieving macroscopic behavior through local interactions. The algorithms are applied in network cases that differ in density, node distribution and available energy, and also in fundamentally different models, such as under faults, with incremental node deployment and with mobile nodes. In all these settings our techniques achieve significant gains, demonstrating their value as tools of algorithmic design.

Abstract: A central problem in distributed computing and telecommunications
is the establishment of common knowledge between two computing
entities. An immediate use of such common knowledge is in the
initiation of a secure communication session between two entities
since the two entities may use this common knowledge in order to
produce a secret key for use with some symmetric cipher.
The dynamic establishment of shared information (e.g. secret key)
between two entities is particularly important in networks with no
predetermined structure such as wireless mobile ad-hoc networks.
In such networks, nodes establish and terminate communication
sessions dynamically with other nodes which may have never been
encountered before in order to somehow exchange information which
will enable them to subsequently communicate in a secure manner.
In this paper we give and theoretically analyze a protocol that
enables two entities initially possessing a string each to
securely eliminate inconsistent bit positions, obtaining strings
with a larger percentage of similarities. This can help the nodes
establish a shared set of bits and use it as a key with some
shared key encryption scheme.
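
The idea of eliminating inconsistent bit positions can be illustrated with a deliberately crude sketch (block hashing and discarding, which is not the protocol analyzed in the paper): both sides split their bit strings into blocks, exchange short hashes of each block over the channel, and discard the positions of every block whose hashes disagree, so the surviving bits agree on both sides.

```python
import hashlib

def keep_matching_blocks(bits_a, bits_b, block=4):
    """Illustrative reconciliation round: discard the positions of
    every block whose (exchanged) hashes disagree.  Assuming no hash
    collisions, the surviving bits are identical on both sides and
    can seed a shared key."""
    digest = lambda bits: hashlib.sha256(bytes(bits)).digest()[:4]
    keep_a, keep_b = [], []
    for start in range(0, len(bits_a), block):
        a = bits_a[start:start + block]
        b = bits_b[start:start + block]
        if digest(a) == digest(b):      # hashes exchanged over the channel
            keep_a.extend(a)
            keep_b.extend(b)
    return keep_a, keep_b
```

Unlike this sketch, the paper's protocol only increases the percentage of similarities per round rather than guaranteeing exact agreement, and it analyzes the information leaked by the exchanged messages.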

Abstract: In this paper, we discuss key findings, technological challenges and socioeconomic opportunities in the Smart City era. Most of the conclusions were gathered during the SmartSantander project, an EU project that is developing a city-scale testbed for IoT and Future Internet experimentation, providing an integrated framework for implementing Smart City services.

Abstract: We here present Fun in Numbers (FinN), a framework for developing pervasive applications and interactive installations for entertainment and educational purposes. Using ad hoc mobile wireless sensor network nodes as the enabling devices, FinN allows for the quick prototyping of applications that utilize input from multiple physical sources (sensors and other means of interfacing), by offering a set of programming templates and services, such as topology discovery, localization and synchronization, that hide the underlying complexity. We present the target application domains of FinN, along with a set of multiplayer games and interactive installations. We describe the overall architecture of our platform
and discuss some key implementation issues of the application domains. Finally, we present the experience gained by deploying the applications developed with our platform.

Abstract: The domain of smart cities is currently burgeoning, with a lot of potential for scientific and socio-economic innovation gradually being revealed. It is also becoming apparent that cross-discipline research will be instrumental in designing and building smarter cities, where IoT technology is becoming omnipresent. SmartSantander is an FP7 project that built a massive
city-scale IoT testbed aiming to provide both a tool for the research community and a functional system for the local government to implement operational city
services. In this work, we present key smart cities projects, main application domains and representative smart city frameworks that reflect the latest advances in the smart cities domain and our own experience through SmartSantander. The project has deployed 51,910 IoT endpoints, offering a massive infrastructure to the community, as well as functional system services and a number of end-user applications. Based on these aspects, we identify and
discuss a number of key scientific and technological challenges. We also present an overview of the developed system components and applications, and
discuss the ways that current smart city challenges were handled in the project.

Abstract: Direct routing is the special case of bufferless routing where N packets, once injected into the network, must be routed along specific paths to their destinations without conflicts. We give a general treatment of three facets of direct routing: (i) Algorithms. We present a polynomial time greedy algorithm for arbitrary direct routing problems which is worst-case optimal, i.e., there exist instances for which no direct routing algorithm is better than the greedy. We apply variants of this algorithm to commonly used network topologies. In particular, we obtain near-optimal routing time for the tree and d-dimensional mesh, given arbitrary sources and destinations; for the butterfly and the hypercube, the same result holds for random destinations. (ii) Complexity. By a reduction from Vertex Coloring, we show that Direct Routing is inapproximable, unless P=NP. (iii) Lower Bounds for Buffering. We show the existence of routing problems which cannot be solved efficiently with direct routing. To solve these problems, any routing algorithm needs buffers. We give nontrivial lower bounds on such buffering requirements for general routing algorithms.
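
A minimal sketch of the greedy idea (illustrative, not the paper's exact algorithm or analysis): packets are considered one by one, and each gets the smallest injection time at which it meets no already-scheduled packet on any edge at the same moment, under the convention that a packet injected at time t crosses edge i of its path at time t + i.

```python
def greedy_direct_routing(paths):
    """Greedy direct routing sketch: schedule packets one by one,
    assigning each the smallest conflict-free injection time.  A
    packet injected at time t crosses edge i of its path at t + i."""
    busy = set()                          # (edge, time) pairs in use
    times = []
    for path in paths:                    # path = sequence of edges
        t = 0
        while any((e, t + i) in busy for i, e in enumerate(path)):
            t += 1
        busy.update((e, t + i) for i, e in enumerate(path))
        times.append(t)
    return times
```

For example, two packets whose paths share the edge ("b", "c") at the same offset cannot both be injected at time 0, so the second is delayed by one step.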

Abstract: When one engineers distributed algorithms, some special characteristics arise that are different from conventional (sequential or parallel) computing paradigms. These characteristics include: the need for either a scalable real network environment or a platform supporting a simulated distributed environment; the need to incorporate asynchrony, where arbitrary asynchrony is hard, if not impossible, to implement; and the generation of “difficult” input instances, which is a particular challenge. In this work, we identify some of the methodological issues required to address the above characteristics in distributed algorithm engineering and illustrate certain approaches to tackle them via case studies. Our discussion begins by addressing the need for a simulation environment and how asynchrony is incorporated when experimenting with distributed algorithms. We then proceed by suggesting two methods for generating difficult input instances for distributed experiments, namely a game-theoretic one and another based on simulations of adversarial arguments or lower bound proofs. We give examples of the experimental analysis of a pursuit-evasion protocol and of a shared memory problem in order to demonstrate these ideas. We then address a particularly interesting case of conducting experiments with algorithms for mobile computing and tackle the important issue of motion of processes in this context. We discuss the two-tier principle as well as a concurrent random walks approach on an explicit representation of motions in ad-hoc mobile networks, which allow at least for average-case analysis and measurements and may give worst-case inputs in some cases. Finally, we discuss a useful interplay between theory and practice that arises in modeling attack propagation in networks.

Abstract: Wireless sensor networks are comprised of a vast number of ultra-small fully autonomous computing, communication and sensing devices, with very restricted energy and computing capabilities, which co-operate to accomplish a large sensing task. Such networks can be very useful in practice in applications that require fine-grain monitoring of physical environment subjected to critical conditions (such as inaccessible terrains or disaster places). Very large numbers of sensor devices can be deployed in areas of interest and use self-organization and collaborative methods to form deeply networked environments. Features including the huge number of sensor devices involved, the severe power, computational and memory limitations, their dense deployment and frequent failures, pose new design and implementation aspects. The efficient and robust realization of such large, highly-dynamic, complex, non-conventional environments is a challenging algorithmic and technological task. In this work we consider certain important aspects of the design, deployment and operation of distributed algorithms for data propagation in wireless sensor networks and discuss some characteristic protocols, along with an evaluation of their performance.

Abstract: We exploit the game-theoretic ideas presented in [12] to
study the vertex coloring problem in a distributed setting. The vertices
of the graph are seen as players in a suitably defined strategic game,
where each player has to choose some color, and the payoff of a vertex is
the total number of players that have chosen the same color as its own.
We extend here the results of [12] by showing that, if any subset of nonneighboring
vertices perform a selfish step (i.e., change their colors in order
to increase their payoffs) in parallel, then a (Nash equilibrium) proper
coloring, using a number of colors within several known upper bounds
on the chromatic number, can still be reached in polynomial time. We
also present an implementation of the distributed algorithm in wireless
networks of tiny devices and evaluate the performance in simulated and
experimental environments. The performance analysis indicates that it
is the first practically implementable distributed algorithm.
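
A sequential (rather than parallel) sketch of the selfish steps described above, under the simplifying assumption that a vertex's payoff is the size of its color class and that a selfish step moves it to the most popular color not used by any neighbor. Starting from a proper coloring, every step preserves properness, so when no vertex can improve, the result is a proper (Nash equilibrium) coloring.

```python
from collections import Counter

def selfish_coloring(adj):
    """Run sequential selfish steps in a simplified version of the
    coloring game: each vertex repeatedly switches to the most
    popular color used by no neighbor, as long as that strictly
    increases the size of its own color class."""
    colors = {v: v for v in adj}          # start with all colors distinct
    improved = True
    while improved:
        improved = False
        for v in sorted(adj):
            counts = Counter(colors.values())
            forbidden = {colors[u] for u in adj[v]}
            best = max((c for c in counts if c not in forbidden),
                       key=lambda c: counts[c], default=colors[v])
            if best != colors[v] and counts[best] + 1 > counts[colors[v]]:
                colors[v] = best
                improved = True
    return colors
```

Termination follows because every improving step strictly increases the sum of squared color-class sizes, a standard potential-function argument.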

Abstract: Counting items in a distributed system, and estimating the cardinality of multisets in particular,
is important for a large variety of applications and a fundamental building block for emerging Internet-scale information systems. Examples of such applications range from optimizing query access plans in peer-to-peer data sharing, to computing the significance (rank/score) of data items in distributed information retrieval. The general formal problem addressed in this article is computing the network-wide distinct number of items with some property (e.g., distinct files with file name
containing “spiderman”) where each node in the network holds an arbitrary subset, possibly overlapping the subsets of other nodes. The key requirements that a viable approach must satisfy are:
(1) scalability towards very large network size, (2) efficiency regarding messaging overhead, (3) load
balance of storage and access, (4) accuracy of the cardinality estimation, and (5) simplicity and easy
integration in applications. This article contributes the DHS (Distributed Hash Sketches) method
for this problem setting: a distributed, scalable, efficient, and accurate multiset cardinality estimator.
DHS is based on hash sketches for probabilistic counting, but distributes the bits of each counter
across network nodes in a judicious manner based on principles of Distributed Hash Tables, paying
careful attention to fast access and aggregation as well as update costs. The article discusses various
design choices, exhibiting tunable trade-offs between estimation accuracy, hop-count efficiency, and
load distribution fairness. We further contribute a full-fledged, publicly available, open-source implementation of all our methods, and a comprehensive experimental evaluation for various settings.
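The hash-sketch building block underlying DHS can be illustrated locally with a minimal Flajolet–Martin-style estimator in Python. The bitmap size, hash choice and correction constant below are illustrative; DHS's actual contribution — distributing the counter bits across DHT nodes — is not shown:

```python
import hashlib

def rho(x):
    # Position of the least-significant 1-bit of x (1-indexed),
    # as used by Flajolet-Martin hash sketches.
    pos = 1
    while x & 1 == 0:
        x >>= 1
        pos += 1
    return pos

def hash_item(item):
    h = hashlib.sha256(str(item).encode()).digest()
    return int.from_bytes(h[:8], "big") | 1 << 63  # force non-zero

def sketch(items, m=64):
    # One m-bit bitmap; each distinct item sets bit rho(hash(item)).
    bitmap = [0] * m
    for it in items:
        bitmap[rho(hash_item(it)) - 1] = 1
    return bitmap

def estimate(bitmap, phi=0.77351):
    # R = index of the lowest unset bit; 2**R / phi estimates cardinality.
    R = bitmap.index(0)
    return (2 ** R) / phi

# Duplicates do not change the sketch: it counts cardinality, not multiplicity.
s1 = sketch(["spiderman"] * 1000)
s2 = sketch(["spiderman"])
assert s1 == s2

# Distributed use: bitmaps from different nodes merge with a bitwise OR,
# so overlapping subsets are not double-counted.
a = sketch(range(500))
b = sketch(range(250, 750))
union = [x | y for x, y in zip(a, b)]
```

The OR-mergeability is what makes the estimator suitable for network-wide distinct counting over overlapping node subsets.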

Abstract: In this paper we describe a new simulation platform for heterogeneous distributed systems comprised of small programmable objects (e.g., wireless sensor networks) and traditional networked processors. Simulating such systems is complicated because of the need to coordinate compilers and simulators, often with very different interfaces, options, and fidelities.
Our platform (which we call ADAPT) is a flexible and extensible environment that provides a highly scalable simulator with unique characteristics. While the platform provides advanced functionality such as real-time simulation monitoring, custom topologies and scenarios, mixing real and simulated nodes, etc., the effort required by the user and the impact on their code is minimal. We here present its architecture and the most important design decisions, and discuss its distinct features and functionalities. We integrate our simulator with the Sun SPOT platform to enable simulation of sensing applications that employ both low-end and high-end devices, programmed in different languages and internetworked with heterogeneous technologies. We believe that ADAPT will make the development of applications that use small programmable objects more widely accessible and will enable researchers to pursue a joint research approach that combines both theory and practice.

Abstract: We address the issue of measuring distribution fairness in Internet-scale networks. This problem has several interesting instances encountered in different applications, ranging from assessing the distribution of load between network nodes for load balancing purposes, to measuring node utilization for optimal resource exploitation, and to guiding autonomous decisions of nodes in networks built with market-based economic principles. Although some metrics have been proposed, particularly for assessing load balancing algorithms, they fall short. We first study the appropriateness of various known and previously proposed statistical metrics for measuring distribution fairness. We put forward a number of required characteristics for appropriate metrics. We propose and comparatively study the appropriateness of the Gini coefficient (G) for this task. Our study reveals as most appropriate the metrics of G, the fairness index (FI), and the coefficient of variation (CV), in this order. Second, we develop six distributed sampling algorithms to estimate metrics online efficiently, accurately, and scalably. One of these algorithms (2-PRWS) is based on two effective optimizations of a basic algorithm, and the other two (the sequential sampling algorithm, LBS-HL, and the clustered sampling one, EBSS) are novel, developed especially to estimate G. Third, we show how these metrics, and especially G, can be readily utilized online by higher-level algorithms, which can now know when to best intervene to correct unfair distributions (in particular, load imbalances). We conclude with a comprehensive experimentation which comparatively evaluates both the various proposed estimation algorithms and the three most appropriate metrics (G, CV, and FI). Specifically, the evaluation quantifies the efficiency (in terms of number of messages and a latency indicator), precision, and accuracy achieved by the proposed algorithms when estimating the competing fairness metrics.
The central conclusion is that the proposed metric, G, can be estimated with a small number of messages and latency, regardless of the skew of the underlying distribution.
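The three metrics singled out above are easy to state concretely. A small Python sketch (plain-list implementations; the example distributions are illustrative, not from the paper's experiments):

```python
def gini(xs):
    # Gini coefficient G in [0, 1): 0 means a perfectly even distribution.
    xs = sorted(xs)
    n, s = len(xs), sum(xs)
    # G = 2 * sum_i (i * x_(i)) / (n * sum x) - (n + 1) / n, i = 1..n
    return 2 * sum(i * x for i, x in enumerate(xs, 1)) / (n * s) - (n + 1) / n

def fairness_index(xs):
    # Jain's fairness index FI in (0, 1]: 1 means perfectly even.
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

def cv(xs):
    # Coefficient of variation: standard deviation over the mean.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return var ** 0.5 / mean

even = [10, 10, 10, 10]     # perfectly balanced load
skewed = [37, 1, 1, 1]      # one node carries almost everything
assert gini(even) == 0 and fairness_index(even) == 1
assert gini(skewed) > gini(even) and cv(skewed) > cv(even)
```

All three move in the expected direction as the distribution becomes more skewed; the paper's contribution is estimating them online by sampling, without collecting every node's value.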

Abstract: In this paper we present the efficient burst reservation protocol (EBRP) suitable
for bufferless optical burst switching (OBS) networks. The EBRP protocol is a
two-way reservation scheme that employs timed and in-advance reservation of
resources. In the EBRP protocol timed reservations are relaxed, introducing a
reservation time duration parameter that is negotiated during the call setup phase.
This feature allows bursts to reserve resources beyond their actual size to
increase their successful forwarding probability and can be used to provide
quality-of-service (QoS) differentiation. The EBRP protocol is suitable for
OBS networks and can guarantee a low blocking probability for bursts that can
tolerate the round-trip delay associated with the two-way reservation. We present
the main features of the proposed protocol and describe in detail the timing
considerations regarding the call setup phase and the actual reservation process.
Furthermore, we show evaluation results and compare the EBRP performance
against two other typical reservation schemes, a tell-and-wait and a tell-and-go
(just-enough-time) like protocol. EBRP has been developed for the control plane
of the IST-LASAGNE project.
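The timed, in-advance reservation idea can be sketched on a slotted timeline. This is a simplification — EBRP operates on continuous time offsets and real signalling messages — and the names below are illustrative:

```python
class Link:
    """Slotted timeline of future reservations on one outgoing link
    (a simplification: actual OBS reservations use continuous time)."""

    def __init__(self, horizon):
        self.busy = [False] * horizon  # one flag per future time slot

    def try_reserve(self, start, burst_len, extra):
        # Timed in-advance reservation: hold [start, start+burst_len+extra).
        # 'extra' models the negotiated relaxation beyond the burst's actual
        # size, which raises its successful-forwarding probability and can
        # differentiate QoS classes.
        end = start + burst_len + extra
        if any(self.busy[start:end]):
            return False          # blocked: a NAK goes back to the source
        for t in range(start, end):
            self.busy[t] = True
        return True               # ACK: resources held for the burst

link = Link(horizon=20)
assert link.try_reserve(start=2, burst_len=4, extra=2)      # holds slots 2..7
assert not link.try_reserve(start=6, burst_len=3, extra=0)  # overlap: blocked
assert link.try_reserve(start=8, burst_len=3, extra=0)      # slots 8..10 free
```

The two-way aspect is implicit in the return value: the source only transmits after the reservation succeeds, which is why EBRP suits bursts that tolerate the round-trip delay.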

Abstract: We investigate the Vehicle Routing Problem with Time Windows (VRPTW) under an eco-friendly framework that demands the delivery of balanced and compact customer clusters. We present a new approach consisting of three major phases: (i) a first clustering of customers with compatible time windows; (ii) a second clustering of customers with close geographic proximity based on various methods (natural cuts, KaHIP, quad trees); (iii) a refinement phase that either splits a cluster into smaller ones, or merges clusters to form a bigger compact cluster. Our approach turns out to be beneficial when used in an on-line environment, where changes to the initial tour are requested (add a new customer to the tour or drop some customers). The new method serves as a warm starting point for re-evaluating and further optimizing the solution of VRPTW. Experiments with real data sets demonstrate that our approach compares favorably with standard approaches that start from a basic (cold) solution.
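Phase (i) — grouping customers with compatible time windows — can be sketched greedily in Python. This is a simplified stand-in for the paper's first phase (the geographic clustering of phase (ii) and the split/merge refinement of phase (iii) are not shown), and the greedy rule is an assumption, not the paper's exact method:

```python
def windows_overlap(a, b):
    # Two time windows (start, end) are compatible if they intersect.
    return a[0] < b[1] and b[0] < a[1]

def cluster_by_time_window(customers):
    """Greedy first-phase clustering: put a customer in the first cluster
    whose common window still intersects its own, otherwise open a new
    cluster. Each cluster keeps the intersection of its members' windows."""
    clusters = []  # list of (common_window, [customer ids])
    for cid, win in customers:
        for i, (common, members) in enumerate(clusters):
            if windows_overlap(common, win):
                # shrink the cluster's common window to the intersection
                new_common = (max(common[0], win[0]), min(common[1], win[1]))
                clusters[i] = (new_common, members + [cid])
                break
        else:
            clusters.append((win, [cid]))
    return [members for _, members in clusters]

customers = [("a", (8, 10)), ("b", (9, 12)), ("c", (14, 16)), ("d", (15, 17))]
assert cluster_by_time_window(customers) == [["a", "b"], ["c", "d"]]
```

In an on-line setting, a newly requested customer can be dropped into the first compatible cluster the same way, giving the warm start the abstract describes.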

Abstract: We present three new coordination mechanisms for scheduling n selfish jobs on m unrelated machines. A coordination mechanism aims to mitigate the impact of selfishness of jobs on the efficiency of schedules by defining a local scheduling policy on each machine. The scheduling policies induce a game among the jobs, and each job prefers to be scheduled on a machine so that its completion time is minimum given the assignments of the other jobs. We consider the maximum completion time among all jobs as the measure of the efficiency of schedules. The approximation ratio of a coordination mechanism quantifies the efficiency of pure Nash equilibria (price of anarchy) of the induced game. Our mechanisms are deterministic, local, and preemptive in the sense that the scheduling policy does not necessarily process the jobs in an uninterrupted way and may introduce some idle time. Our first coordination mechanism has approximation ratio O(log m) and always guarantees that the induced game has pure Nash equilibria to which the system converges in at most n rounds. This result improves a recent bound of O(log^2 m) due to Azar, Jain, and Mirrokni and, similarly to their mechanism, our mechanism uses a global ordering of the jobs according to their distinct IDs. Next we study the intriguing scenario where jobs are anonymous, i.e., they have no IDs. In this case, coordination mechanisms can only distinguish between jobs that have different load characteristics. Our second mechanism handles anonymous jobs and has approximation ratio O(log m / log log m), although the game induced is not a potential game and, hence, the existence of pure Nash equilibria is not guaranteed by potential function arguments. However, it provides evidence that the known lower bounds for non-preemptive coordination mechanisms could be beaten using preemptive scheduling policies. Our third coordination mechanism also handles anonymous jobs and has a nice "cost-revealing" potential function. Besides proving the existence of equilibria, we use this potential function to upper-bound the price of stability of the induced game by O(log m), the price of anarchy by O(log^2 m), and the convergence time to O(log^2 m)-approximate assignments by a polynomial number of best-response moves. Our third coordination mechanism is the first that handles anonymous jobs and simultaneously guarantees that the induced game is a potential game and has bounded price of anarchy.

Abstract: Data collection is usually performed in wireless sensor networks by the sensors
relaying data towards a static control center (sink). Motivated by important
applications (mostly related to ambient intelligence and remote monitoring)
and as a first step towards introducing mobility, we propose the basic
idea of having a sink moving in the network area and collecting
data from sensors. We propose four characteristic mobility patterns
for the sink along with different data collection strategies. Through a
detailed simulation study, we evaluate several important performance properties of
each approach. Our findings demonstrate that by taking advantage
of the sink's mobility and shifting work from sensors to the powerful sink,
we can significantly reduce the energy spent in relaying traffic and thus greatly
extend the lifetime of the network.

Abstract: Through recent technology advances in the field of wireless energy transmission, Wireless Rechargeable Sensor Networks (WRSN) have emerged. In this new paradigm for WSNs, a mobile entity called the Mobile Charger (MC) traverses the network and replenishes the dissipated energy of sensors. In this work we first provide a formal definition of the charging dispatch decision problem and prove its computational hardness. We then investigate how to optimize the trade-offs of several critical aspects of the charging process, such as a) the trajectory of the charger, b) the different charging policies and c) the impact of the ratio of the energy the MC may deliver to the sensors over the total available energy in the network. In the light of these optimizations, we then study the impact of the charging process on the network lifetime for three characteristic underlying routing protocols: a greedy protocol, a clustering protocol and an energy balancing protocol. Finally, we propose a Mobile Charging Protocol that locally adapts the circular trajectory of the MC to the energy dissipation rate of each sub-region of the network. We compare this protocol against several MC trajectories for all three routing families by a detailed experimental evaluation. The derived findings demonstrate significant performance gains, both with respect to the no-charger case as well as the different charging alternatives; in particular, the performance improvements include the network lifetime, as well as connectivity, coverage and energy balance properties.

Abstract: The Time Dependent Team Orienteering Problem with Time Windows (TDTOPTW) can be used to model several real life problems. Among them, the route planning problem for tourists interested in visiting multiple points of interest (POIs) using public transport. The main objective of this problem is to select POIs that match tourist preferences, while taking into account a multitude of parameters and constraints and respecting the time available for sightseeing on a daily basis. TDTOPTW is NP-hard, while almost the whole body of the related literature addresses the non time dependent version of the problem. The only TDTOPTW heuristic proposed so far is based on the assumption of periodic service schedules. Herein, we propose two efficient cluster-based heuristics for the TDTOPTW which yield high quality solutions, take into account time dependency in calculating travel times between POIs and make no assumption on periodic service schedules. The validation scenario for our prototyped algorithms included the metropolitan transit network and real POI sets compiled from Athens (Greece).

Abstract: We study the problem of localizing and tracking multiple moving targets in wireless sensor
networks, from a network design perspective, i.e., towards estimating the least possible number
of sensors to be deployed, their positions and operation characteristics needed to perform the
tracking task. To avoid an expensive massive deployment, we try to take advantage of
possible coverage overlaps over space and time, by introducing a novel combinatorial model
that captures such overlaps.
Under this model, we abstract the tracking network design problem by a combinatorial
problem of covering a universe of elements by at least three sets (to ensure that each point in
the network area is covered at any time by at least three sensors, and thus being localized). We
then design and analyze an efficient approximate method for sensor placement and operation,
that with high probability and in polynomial expected time achieves a Θ(log n) approximation
ratio to the optimal solution. Our network design solution can be combined with alternative
collaborative processing methods, to suitably fit different tracking scenarios.
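The covering abstraction — every point must be covered by at least three sensors — is a set multi-cover problem, and the classic greedy rule gives the logarithmic guarantee the abstract mentions. A Python sketch of that standard greedy baseline (not the paper's own placement method, whose analysis is probabilistic):

```python
def greedy_triple_cover(universe, candidate_sensors, k=3):
    """Greedy set multi-cover: pick sensors until every point is covered by
    at least k of the chosen sensors. 'candidate_sensors' maps a candidate
    sensor id to the set of points it would cover. The greedy choice gives
    the standard O(log n) approximation for this covering problem."""
    need = {p: k for p in universe}          # residual coverage demand
    chosen = []
    available = dict(candidate_sensors)
    while any(need[p] > 0 for p in universe):
        # pick the sensor satisfying the most residual demand
        best = max(available,
                   key=lambda s: sum(need[p] > 0 for p in available[s]))
        gain = sum(need[p] > 0 for p in available[best])
        if gain == 0:
            raise ValueError("k-coverage infeasible with remaining sensors")
        for p in available.pop(best):
            need[p] = max(0, need[p] - 1)
        chosen.append(best)
    return chosen

universe = {1, 2, 3}
sensors = {f"s{i}": {1, 2, 3} for i in range(5)}  # 5 candidates, each covers all
placement = greedy_triple_cover(universe, sensors, k=3)
assert len(placement) == 3  # three full-coverage sensors give 3-coverage
```

With k=3, every point ends up in range of at least three chosen sensors, which is exactly the condition the abstract needs for localization.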

Abstract: Orthogonal Frequency Division Multiplexing (OFDM)
has recently been proposed as a modulation technique for optical networks, because of its good spectral efficiency, flexibility, and tolerance to impairments. We consider the planning problem of an OFDM optical network, where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers and choosing an appropriate modulation level, taking into account the transmission distance. We introduce the Routing, Modulation Level and Spectrum Allocation (RMLSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, prove that it is also NP-complete and present various algorithms to solve it. We start by presenting an optimal ILP RMLSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RMLSA into its two
constituent subproblems, namely, (i) routing and modulation level, and (ii) spectrum allocation (RML+SA), and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all the connections in the traffic matrix. In the sequential algorithm, we investigate two policies for defining the order in which connections are considered. We also use a simulated annealing meta-heuristic to obtain even better orderings. We examine the performance of the proposed algorithms through simulation experiments and evaluate the spectrum utilization benefits that can be obtained by utilizing OFDM elastic bandwidth allocation, when compared to a traditional WDM network.

Abstract: One of the most important applications of wireless sensor networks is building monitoring and, more specifically, the early detection of emergency events and the provision of guidance for safe evacuation of the building. In this paper, we describe a demo application that, in the event of a fire inside a monitored building, uses the information from the deployed sensor network in order to find the shortest safest path away from the emergency and provides navigation guidance to the occupants (modelled by a mobile robot), in order to safely evacuate the building. For this demo, we developed our own ad-hoc robot-sensor interconnection using expansion connectors and programming in a low-level language.

Abstract: Modern Wireless Sensor Networks offer an easy, low-cost and reliable alternative to the back-end for monitoring and controlling large geographical areas like buildings and industries. We present the design and implementation details of an open and efficient prototype system as a solution for a low-cost BMS that comprises heterogeneous, small-factor wireless devices. Placing it in the context of the Internet of Things, we come up with a solution that can cooperate with other systems installed on the same site to lower power consumption and costs, as well as benefit the humans that use its services in a transparent way. We evaluate and assess key aspects of the performance of our prototype. Our findings indicate specific approaches to reduce the operation costs and allow the development of open applications.

Abstract: Urban ecosystems are becoming one of the potentially most attractive scenarios for innovating new services and technologies. In parallel, city managers, urban utilities and other stakeholders are fostering the intensive use of advanced technologies aiming at improving present city performance and its sustainability. The deployment of such technology entails the generation of massive amounts of information, which in many cases might become useful for other services and applications. Hence, aiming at taking advantage of such massive amounts of information and deployed technology, as well as breaking the potential digital barrier that can be raised, some easy-to-use tools have to be made available to the urban stakeholders. These tools, integrated in a platform operated directly or indirectly by the city, provide a singular opportunity for exploiting the concept of the connected city, whilst fostering innovation in all city dimensions, making the co-creation concept a reality and eventually impacting on government policies.

Abstract: Few IoT systems monitoring energy consumption in buildings have focused on the educational community. IoT in the educational domain can jump-start a process of sustainability awareness and behavioral change towards energy savings, as well as provide tangible financial savings. We present a real-world multi-site IoT deployment, comprising 19 school buildings, aiming at enabling IoT-based energy awareness and sustainability lectures, promoting energy-saving behaviors supported by IoT data. We discuss scenarios where IoT-enabled applications are integrated into school life, providing an engaging and hands-on approach, based on real data, generating value in terms of educational and energy savings outcomes. We also present a set of first results, based on the analysis of school-building data, which highlight potential ways to identify irregularities and inefficiencies.

Abstract: We study the problem of energy-balanced data propagation
in wireless sensor networks. The energy balance property
guarantees that the average per sensor energy dissipation
is the same for all sensors in the network, during
the entire execution of the data propagation protocol. This
property is important since it prolongs the network's lifetime
by avoiding early energy depletion of sensors.
We propose a new algorithm that in each step decides
whether to propagate data one-hop towards the final destination
(the sink), or to send data directly to the sink. This
randomized choice balances the (cheap) one-hop transmissions
with the direct transmissions to the sink, which are
more expensive but “bypass” the sensors lying close to the
sink. Note that, in most protocols, these close to the sink
sensors tend to be overused and die out early.
By a detailed analysis we precisely estimate the probabilities
for each propagation choice in order to guarantee
energy balance. The needed estimation can easily be performed
by current sensors using simple to obtain information.
Under some assumptions, we also derive a closed form
for these probabilities.
The fact (shown by our analysis) that direct (expensive)
transmissions to the sink are needed only rarely, shows that
our protocol, besides energy-balanced, is also energy efficient.
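The per-step randomized choice can be sketched in Python. The probabilities and the quadratic cost model for direct transmission below are illustrative assumptions, not the closed form derived in the paper:

```python
import random

def propagate(hop, p_direct, rng):
    """One step of the randomized choice at a sensor 'hop' hops from the
    sink: with probability p_direct[hop] send straight to the sink (cost
    ~ hop**2 under a common distance-squared energy model assumption),
    otherwise relay one hop inward (cost ~ 1)."""
    if rng.random() < p_direct[hop]:
        return 0, hop ** 2        # direct: message arrives, but expensive
    return hop - 1, 1             # one-hop: cheap, burdens inner sensors

def deliver(start_hop, p_direct, rng):
    # Total transmission energy spent delivering one message to the sink.
    hop, energy = start_hop, 0
    while hop > 0:
        hop, cost = propagate(hop, p_direct, rng)
        energy += cost
    return energy

rng = random.Random(1)
# Illustrative direct-transmission probabilities (NOT the paper's derived
# values): rare direct sends suffice to spare the sensors near the sink.
p_direct = {h: 0.1 for h in range(1, 11)}
costs = [deliver(10, p_direct, rng) for _ in range(1000)]
assert min(costs) == 10   # all-relay delivery pays one unit per hop
assert max(costs) <= 100  # worst case: direct from the outermost ring
```

The paper's analysis chooses the probabilities so that, averaged over traffic, every ring of sensors dissipates the same energy; this sketch only shows the mechanism, not that balancing derivation.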

Abstract: Wireless Sensor Networks are comprised of a vast number of ultra-small, autonomous computing and communication devices, with restricted energy, that co-operate to accomplish a large sensing task. In this work: a) We propose extended versions of two data propagation protocols for such networks: the Sleep-Awake Probabilistic Forwarding Protocol (SW-PFR) and the Hierarchical Threshold sensitive Energy Efficient Network protocol (H-TEEN). These non-trivial extensions improve the performance of the original protocols, by introducing sleep-awake periods in the PFR protocol to save energy, and introducing a hierarchy of clustering in the TEEN protocol to better cope with large networks, b) We implemented the two protocols and performed an extensive simulation comparison of various important measures of their performance with a focus on energy consumption, c) We investigate in detail the relative advantages and disadvantages of each protocol, d) We discuss a possible hybrid combination of the two protocols towards optimizing certain goals.

Abstract: A crucial issue in wireless networks is to support efficiently communication patterns that are typical in traditional (wired) networks. These include broadcasting, multicasting, and gossiping (all-to-all communication). In this work we study such problems in static ad hoc networks. Since, in ad hoc networks, energy is a scarce resource, the important engineering question to be solved is to guarantee a desired communication pattern minimizing the total energy consumption. Motivated by this question, we study a series of wireless network design problems and present new approximation algorithms and inapproximability results.

Abstract: Many efforts have been done in the last years to model public transport timetables in order to
find optimal routes. The proposed models can be classified into two types: those representing the
timetable as an array, and those representing it as a graph. The array-based models have been
shown to be very effective in terms of query time, while the graph-based models usually answer
queries by computing shortest paths, and hence they are suitable to be used in combination with
speed-up techniques developed for road networks.
In this paper, we focus on the dynamic behavior of graph-based models considering the case
where transportation systems are subject to delays with respect to the given timetable. We
make three contributions: (i) we give a simplified and optimized update routine for the well-known
time-expanded model along with an engineered query algorithm; (ii) we propose a new
graph-based model tailored for handling dynamic updates; (iii) we assess the effectiveness of
the proposed models and algorithms by an experimental study, which shows that both models
require negligible update time and a query time which is comparable to that required by some
array-based models.
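An earliest-arrival query in the spirit of these graph-based models can be sketched in a few lines of Python. The representation below (a flat list of elementary connections with implicit waiting at stops) is a deliberate simplification of the time-expanded graph, and the example timetable is invented:

```python
import heapq

def earliest_arrival(connections, source, target, start_time):
    """Dijkstra-style earliest-arrival query: each elementary connection is
    (from_stop, dep_time, to_stop, arr_time); waiting at a stop is implicit.
    A delay update would simply rewrite the affected connections' times,
    which is the dynamic behavior the paper studies."""
    best = {source: start_time}
    queue = [(start_time, source)]
    while queue:
        t, stop = heapq.heappop(queue)
        if stop == target:
            return t
        if t > best.get(stop, float("inf")):
            continue  # stale queue entry
        for frm, dep, to, arr in connections:
            # only connections we can still catch improve anything
            if frm == stop and dep >= t and arr < best.get(to, float("inf")):
                best[to] = arr
                heapq.heappush(queue, (arr, to))
    return best.get(target)  # None if target is unreachable

conns = [
    ("A", 8, "B", 9), ("B", 10, "C", 11),   # A->B->C arrives at 11
    ("A", 8, "C", 13),                       # direct but slower
]
assert earliest_arrival(conns, "A", "C", 8) == 11
```

Real time-expanded models materialize (stop, time) event nodes with waiting and travel edges instead of scanning a connection list, which is what makes road-network speed-up techniques applicable.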

Abstract: Core optical networks using reconfigurable optical
switches and tunable lasers appear to be on the road towards
widespread deployment and could evolve to all-optical mesh
networks in the near future. Considering the impact of physical
layer impairments in the planning and operation of all-optical
(and translucent) networks is the main focus of the DICONET
project. The impairment aware network planning and operation
tool (NPOT) is the main outcome of DICONET project, which
is explained in detail in this paper. The key building blocks of
the NPOT, consisting of network description repositories, the
physical layer performance evaluator, the impairment aware
routing and wavelength assignment engines, the component
placement modules, failure handling and the integration of
NPOT in the control plane are the main contributions of this
work. In addition, the experimental results of the DICONET proposal for
centralized and distributed control plane integration schemes, and
the performance of failure handling in terms of restoration
time, are also presented.

Abstract: We present key aspects (hardware, software, topology, networking) of SenseWall, an experimental sensor network test-bed we have created for the implementation and engineering of distributed sensor network algorithms. We then describe how SenseWall has been in particular used to implement two recent state of the art algorithms for energy balanced sensor data propagation. We elaborate on the issues and challenges created by the restrictions and particularities of the experimental test-bed and how we dealt with them. We also carry out a detailed performance evaluation comparing the energy balance protocols to two baseline protocols that include only either single hop or direct data transmissions.

Abstract: We investigate the efficiency of protocols for basic communication in ad-hoc mobile networks. All protocols are classified in the semi-compulsory, relay category, according to which communication is achieved through a small team of mobile hosts, called the support, which move in a predetermined way and serve as an intermediate pool for receiving and delivering messages. We implement a new semi-compulsory, relay protocol in which the motion of the support is based on a so-called ``hunter" strategy developed for a pursuit-evasion game. We conduct an extensive, comparative experimental study of this protocol with two other existing protocols, each one possessing a different motion for its support. We considered several types of inputs, including among others two kinds of motion patterns (random and adversarial) for the mobile hosts not in the support. Our experiments showed that for all protocols the throughput scales almost linearly with the number of mobile hosts in the network, and that a small support size suffices for efficient communication. An interesting outcome is that in most cases the new protocol is inferior to the other two, although it has global knowledge of the motion space.

Abstract: We propose a fair scheduling algorithm for Computational Grids, called the Fair Execution Time Estimation (FETE) algorithm. FETE assigns a task to the computation resource that minimizes what we call its fair execution time estimation. The fair execution time of a task on a certain resource is an estimate of the time by which the task will be executed on the resource, assuming it gets a fair share of the resource's computational power. Though space-shared scheduling is used in practice, the estimates of the fair execution times are obtained assuming that a time-sharing discipline is used. We experimentally evaluate the proposed algorithm and observe that it outperforms other known scheduling algorithms. We also propose a version of FETE, called Simple FETE (SFETE), which requires no a-priori knowledge of the tasks' workloads and in most cases has similar performance to that of FETE.
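The fair-share estimate driving the assignment can be sketched as follows. This is a simplified reading of FETE — it assumes the competing tasks never finish, so each of the k+1 time-sharing tasks keeps capacity/(k+1) of the processing rate for the whole run, which the actual FETE estimate refines:

```python
def fair_execution_time(workload, capacity, queued_workloads):
    """Simplified FETE-style estimate: a task time-sharing the resource
    with the k tasks already assigned there gets capacity/(k+1) of the
    processing rate. (An upper bound: real FETE accounts for competing
    tasks finishing and freeing their share.)"""
    share = capacity / (len(queued_workloads) + 1)
    return workload / share

def assign(workload, resources):
    # Assign the task to the resource minimising its fair execution time
    # estimate, as FETE does, then queue it there.
    best = min(resources, key=lambda r: fair_execution_time(
        workload, resources[r]["capacity"], resources[r]["queue"]))
    resources[best]["queue"].append(workload)
    return best

resources = {
    "fast_busy": {"capacity": 10.0, "queue": [50.0, 50.0]},  # share: 10/3
    "slow_idle": {"capacity": 4.0, "queue": []},             # share: 4.0
}
# Estimates for a workload of 20: fast_busy 20/(10/3) = 6.0,
# slow_idle 20/4 = 5.0, so the slower but idle resource wins.
assert assign(20.0, resources) == "slow_idle"
```

The example shows the point of the estimate: a nominally faster machine loses if its capacity is already being shared among many queued tasks.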

Abstract: We investigate the impact of multiple, mobile sinks on
efficient data collection in wireless sensor networks. To
improve performance, our protocol design focuses on minimizing
overlaps of sink trajectories and balancing the service load
among the sinks. To cope with high network dynamics, placement
irregularities and limited network knowledge, we propose three different
protocols: a) a centralized one that explicitly equalizes spatial coverage;
this protocol relies on strong modeling assumptions, and also serves as a kind
of performance lower bound in uniform networks of low dynamics; b)
a distributed protocol based on mutual avoidance of sinks; and c) a clustering
protocol that distributively groups service areas towards balancing the load per sink.
Our simulation findings demonstrate significant gains in latency, while keeping the success
rate and the energy dissipation at very satisfactory levels even under
high network dynamics and deployment heterogeneity.

Abstract: Many WSN algorithms and applications are based on knowledge
regarding the position of nodes inside the network area.
However, the solution of using GPS based modules in order
to perform localization in WSNs is a rather expensive solution
and in the case of indoor applications, such as smart
buildings, is also not applicable. Therefore, several techniques
have been studied in order to perform relative localization
in WSNs; that is, to compute the position of
a node inside the network area relatively to the position
of other nodes. Many such techniques are based on indicators
like the Radio Signal Strength Indicator (RSSI)
and the Link Quality Indicator (LQI). These techniques are
based on the assumption that there is strong correlation between
the Euclidean distance of the communicating motes
and these indicators. Therefore, high values of RSSI and
LQI should indicate physical proximity of two communicating
nodes. However, these indicators do not depend solely on
distance. Physical obstacles, ambient electromagnetic noise
and interferences from other wireless transmissions also affect
the quality of wireless communication in a stochastic
way. In this paper we propose, implement, experimentally
fine tune and evaluate a localization algorithm that exploits
the stochastic nature of interferences during wireless communications
in order to perform localization in WSNs. Our
algorithm is particularly designed for indoor localization of
moving people in smart buildings. The localization achieved
is fine-grained, i.e., the position of the target mote is successfully
computed with approximately one-meter accuracy.
This fine-grained localization can be used by smart Building
Management Systems in many applications, such as room
adaptation to presence. In our scenario, our proposed algorithm is used by a smart room in order to localize the
position of people inside the room and adapt room illumination
accordingly.
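
The RSSI-distance assumption that the abstract discusses is commonly expressed with the standard log-distance path-loss model. The sketch below is only an illustration of that assumption, not the paper's algorithm; the reference RSSI at one meter and the path-loss exponent are illustrative values that real deployments fine-tune per site.

```python
# Minimal sketch of the log-distance path-loss model:
#   rssi(d) = rssi_at_1m - 10 * n * log10(d)
# inverted to estimate distance from a measured RSSI. Parameters are
# illustrative assumptions, not measured values.

def distance_from_rssi(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.5):
    """Invert the log-distance model to get an estimated distance in meters."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

# Stronger signal -> smaller estimated distance.
print(distance_from_rssi(-45.0))   # reference point: ~1 m
print(distance_from_rssi(-70.0))   # weaker signal: farther away
```

In practice, obstacles and interference make individual readings unreliable, which is exactly the stochastic behavior the abstract's algorithm exploits rather than ignores.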

Abstract: Motivated by the problem of allocating optical bandwidth in tree–shaped WDM networks, we study the fractional path coloring problem in trees. We consider the class of locally-symmetric sets of paths on binary trees and prove that any such set of paths has a fractional coloring of cost at most 1.367L, where L denotes the load of the set of paths. Using this result, we obtain a randomized algorithm that colors any locally-symmetric set of paths of load L on a binary tree (with reasonable restrictions on its depth) using at most 1.367L+o(L) colors, with high probability.

Abstract: We discuss two different ways of having fun with two different kinds of games: On the one hand, we present a framework for developing multiplayer pervasive games that rely on the use of mobile sensor networks. On the other hand, we show how to exploit game theoretic concepts in order to study the graph-theoretic problem of vertex coloring.

Abstract: In this paper we present the design of a simulator platform called FUSE (Fast Universal Simulator Engine). The term Universal means that the Engine can be adapted easily to different domains and be used for varying simulation needs, although our main target is simulation of distributed algorithms in distributed computing environments. The Engine is Fast in the sense that the simulation overhead is minimal and very large systems can be simulated. We discuss the architecture and the design decisions that form the basis of these features. We also describe the functionality that is provided to its users (e.g., monitoring, statistics, etc.).

Abstract: In this work we experimentally study the min order Radiocoloring problem (RCP) on Chordal, Split and Permutation graphs, which are three basic families of perfect graphs. This problem asks to find an assignment using the minimum number of colors to the vertices of a given graph G, so that each pair of vertices which are at distance at most two apart in G have different colors. RCP is an NP-complete problem on chordal and split graphs [4]. For each of the three families, there are upper bounds and/or approximation algorithms known for the minimum number of colors needed to radiocolor such a graph [4,10].
We design and implement radiocoloring heuristics for graphs of the above families, based on the greedy heuristic. Also, for each of the above families, we investigate whether there exist graph instances requiring a number of colors close to the best known upper bound for the family in order to be radiocolored. Towards this goal, we present a number of generators that produce graphs of the above families requiring either (i) a large number of colors (compared to the best upper bound) in order to be radiocolored, called "extremal" graphs, or (ii) a small number of colors, called "non-extremal" instances. The experimental evaluation showed that randomly generated graph instances are in most cases "non-extremal" graphs, and that greedy-like heuristics perform very well in most cases, especially on "non-extremal" graphs.
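
A greedy radiocoloring heuristic in the spirit described above can be sketched as follows. This is a hedged illustration under stated assumptions: vertices are scanned in arbitrary dictionary order, and each receives the smallest color unused within distance two; the paper's actual heuristics and vertex orderings may differ.

```python
# Hedged sketch of a greedy radiocoloring heuristic: each vertex gets the
# smallest color not already used by any vertex at distance at most two.

def greedy_radiocolor(adj):
    """adj: dict vertex -> set of neighbors. Returns dict vertex -> color."""
    color = {}
    for v in adj:
        forbidden = set()
        for u in adj[v]:                      # vertices at distance 1
            if u in color:
                forbidden.add(color[u])
            for w in adj[u]:                  # vertices at distance 2
                if w != v and w in color:
                    forbidden.add(color[w])
        c = 0
        while c in forbidden:                 # smallest available color
            c += 1
        color[v] = c
    return color

# A path on 4 vertices: any two of three consecutive vertices are within
# distance two, so they must all receive distinct colors.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
coloring = greedy_radiocolor(path)
print(coloring)
```

The number of distinct colors used by such a greedy pass is what the experimental evaluation compares against the known upper bounds on "extremal" and "non-extremal" instances.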

Abstract: The technological as well as software advances in
microelectronics and embedded component design have led to the
development of low cost, small-sized devices capable of forming
wireless, ad-hoc networks and sensing a number of qualities of
their environment, while performing computations that depend
on the sensed qualities as well as information received by their
peers. These sensor networks rely on the collective power of
the separate devices as well as their computational and sensing
capabilities to understand "global" environmental states through
locally sampled information and local sensor interactions. Due
to the locality of these networks, which naturally arises from
the locality of their communication capabilities, a number
of interesting connections exist between these networks and
geometrical concepts and problems. In this paper we study two
simple problems that pertain to the formation of low power
and low interference communication patterns in fixed topology
sensor networks. We study the problem of using multihop
communication links instead of direct ones as well as the problem
of forming a communication ring of sensor networks so as to
reduce power consumption as well as interference from other
nodes. Our focus is on the connection between sensor networks
and geometrical concepts, rather than on practicality, so as to
highlight their interrelationship.

Abstract: We study geometric versions of the min-size k-clustering
problem, a clustering problem which generalizes clustering to minimize
the sum of cluster radii and has important applications. We prove that
the problem can be solved in polynomial time when the points to be
clustered are located on a line. For Euclidean spaces of higher dimensions,
we show that the problem is NP-hard and present polynomial-time
approximation schemes. The latter result yields an improved approximation
algorithm for the related problem of k-clustering to minimize the sum of
cluster diameters.

Abstract: A fundamental approach in finding efficiently best routes or optimal itineraries in traffic information
systems is to reduce the search space (part of graph visited) of the most commonly used
shortest path routine (Dijkstra's algorithm) on a suitably defined graph. We investigate reduction
of the search space while simultaneously retaining data structures, created during a preprocessing
phase, of size linear (i.e., optimal) to the size of the graph. We show that the search space of
Dijkstra's algorithm can be significantly reduced by extracting geometric information from a given
layout of the graph and by encapsulating precomputed shortest-path information in the resulting geometric
objects (containers). We present an extensive experimental study comparing the impact of
different types of geometric containers using test data from real-world traffic networks. We also
present new algorithms as well as an empirical study for the dynamic case of this problem, where
edge weights are subject to change and the geometric containers have to be updated and show that
our new methods are two to three times faster than recomputing everything from scratch. Finally,
in an appendix, we discuss the software framework that we developed to realize the implementations
of all of our variants of Dijkstra's algorithm. Such a framework is not trivial to achieve as our
goal was to maintain a common code base that is, at the same time, small, efficient, and flexible,
as we wanted to enhance and combine several variants in any possible way.
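
The geometric-container idea can be sketched compactly: each edge stores a geometric object (here, a bounding box) enclosing all targets whose shortest path starts with that edge, and Dijkstra's algorithm skips edges whose container excludes the query target. The encoding below (adjacency dicts, axis-aligned boxes) is an illustrative assumption, not the paper's implementation.

```python
import heapq

# Hedged sketch of Dijkstra's algorithm with geometric containers:
# an edge (u, v) is relaxed only if the query target lies inside the
# precomputed bounding box of targets reachable best via (u, v).

def dijkstra_with_containers(graph, pos, containers, s, t):
    """graph: u -> list of (v, weight); pos: node -> (x, y);
    containers: (u, v) -> (xmin, ymin, xmax, ymax) or absent (no pruning).
    Returns the shortest-path distance from s to t."""
    tx, ty = pos[t]
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue                                   # stale heap entry
        for v, w in graph[u]:
            box = containers.get((u, v))
            if box is not None:
                xmin, ymin, xmax, ymax = box
                if not (xmin <= tx <= xmax and ymin <= ty <= ymax):
                    continue                           # prune: t not via (u, v)
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

graph = {0: [(1, 1.0), (2, 5.0)], 1: [(2, 1.0)], 2: []}
pos = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
print(dijkstra_with_containers(graph, pos, {}, 0, 2))   # no pruning
```

With correct containers the pruning never discards the true shortest path, which is why the search space shrinks while answers stay exact.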

Abstract: Building efficient internet-scale data management services is the main focus of this chapter. In particular, we aim to show how to leverage DHT technology and extend it with novel algorithms and architectures in order to (i) improve efficiency and reliability for traditional DHT (exact-match) queries, particularly exploiting the abundance of altruism witnessed in real-life P2P networks, (ii) speedup range queries for data stored on DHTs, and (iii) support efficiently and scalably the publish/subscribe paradigm over DHTs, which crucially depends on algorithms for supporting rich queries on string-attribute data.

Abstract: Information retrieval (IR) in peer-to-peer (P2P) networks,
where the corpus is spread across many loosely coupled
peers, has recently gained importance. In contrast to IR
systems on a centralized server or server farm, P2P IR faces
the additional challenge of either being oblivious to global
corpus statistics or having to compute the global measures
from local statistics at the individual peers in an efficient,
distributed manner. One specific measure of interest is the
global document frequency for different terms, which would
be very beneficial as term-specific weights in the scoring and
ranking of merged search results that have been obtained
from different peers.
This paper presents an efficient solution for the problem
of estimating global document frequencies in a large-scale
P2P network with very high dynamics where peers can join
and leave the network on short notice. In particular, the
developed method takes into account the fact that the local
document collections of autonomous peers may arbitrarily
overlap, so that global counting needs to be duplicate-insensitive.
The method is based on hash sketches as a
technique for compact data synopses. Experimental studies
demonstrate the estimator's accuracy, scalability, and
ability to cope with high dynamics. Moreover, the benefit
for ranking P2P search results is shown by experiments with
real-world Web data and queries.
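
Duplicate-insensitive counting with hash sketches can be illustrated with a Flajolet-Martin-style bitmap: each item hashes to a bit position, sketches from different peers merge by bitwise OR, and overlapping collections are therefore counted once. This is a hedged, minimal sketch; the hash choice and the classic 0.77351 correction constant follow the standard FM analysis, not necessarily the paper's exact construction.

```python
import hashlib

# Hedged sketch of Flajolet-Martin hash sketches for duplicate-insensitive
# cardinality estimation: bit r of the bitmap is set if some item's hash
# has exactly r trailing zero bits.

NBITS = 32

def sketch(doc_ids):
    bits = 0
    for d in doc_ids:
        h = int(hashlib.sha256(str(d).encode()).hexdigest(), 16)
        r = (h & -h).bit_length() - 1 if h else NBITS - 1  # lowest set bit
        bits |= 1 << min(r, NBITS - 1)
    return bits

def merge(*sketches):
    """Union of sketches via bitwise OR -- duplicates have no extra effect."""
    out = 0
    for s in sketches:
        out |= s
    return out

def estimate(bits):
    """Classic FM estimate: 2^R / 0.77351, R = first unset bit position."""
    r = 0
    while bits & (1 << r):
        r += 1
    return 2 ** r / 0.77351

a = sketch(range(0, 6000))
b = sketch(range(3000, 9000))           # overlaps heavily with a
print(round(estimate(merge(a, b))))     # rough estimate of |union| = 9000
```

The key property for the P2P setting is that `merge(sketch(A), sketch(B)) == sketch(A | B)`: peers can combine local synopses without ever deduplicating raw document identifiers.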

Abstract: The Internet of Things is shaping up to be the ideal vehicle for introducing pervasive computing in our everyday lives, especially in the form of smart home and building management systems. However, although such technologies are gradually becoming more mainstream, there is still a lot of ground to be covered with respect to public buildings and specifically ones in the educational sector. We discuss here "Green Mindset", an action focusing on energy efficiency and
sustainability in Greek public schools. A large-scale sensor infrastructure has been deployed to 12 public school buildings across diverse settings. We report on the overall design and implementation of the system, as well as on some first results coming from the data produced. Our system provides a flexible and efficient basis for realizing a unified approach to monitoring energy consumption and environmental parameters,
that can be used both for building administration
and educational purposes.

Abstract: One of the major problems algorithm designers usually face is knowing in advance whether a proposed optimization algorithm is going to behave as planned, and, if not, what changes are to be made to the way new solutions are examined so that the algorithm performs well. In this work we develop a methodology for differentiating good neighborhoods from bad ones. As a case study, we consider the structure of the space of assignments for random 3-SAT formulas and compare two neighborhoods: a simple one and a more refined one for which we already know that the corresponding algorithm behaves extremely well. We give evidence that it is possible to tell in advance which neighborhood structure will give rise to a good search algorithm, and we show how our methodology could have been used to discover some recent results on the structure of the SAT space of solutions. As a tool, we use Go with the Winners, an optimization heuristic that uses many particles that independently search the space of all possible solutions. By gathering statistics, we compare the combinatorial characteristics of the different neighborhoods and show that there are certain features that make a neighborhood better than another, thus giving rise to good search algorithms.

Abstract: Human mobility monitoring and respective traces are important for understanding human behavior, respective patterns and associated context. Such data can be potentially used in business intelligence-oriented systems, for providing added-value commercial services or insight into internal enterprise procedures. At the same time, smartphones are rapidly becoming an indispensable tool for our everyday life, while their advanced networking and computing capabilities are increasingly being used as enablers for new applications. We discuss here a system using a stable computing and networking infrastructure along with smartphone applications, based on commodity technologies, meant to be deployed rapidly and provide analytics almost in real time. We also discuss a related scenario in order to provide insight as to where our system could be used. We briefly present the deployment of our system in two settings, an office building and a research exhibition event, along with our experiences. Our findings show that it is feasible and efficient to deploy and operate our system relatively easily, producing meaningful data.

Abstract: In this paper we investigate the construction of efficient secret sharing schemes by using a technique called hypergraph decomposition, extending in a non-trivial way the previously studied graph decomposition technique. A major advantage of hypergraph decomposition is that it applies to any access structure, rather than only structures representable as graphs. As a consequence, we obtain secret sharing schemes for several classes of access structures with improved efficiency over previous results. We also obtain an elementary characterization of the ideal access structures among the hyperstars, which is of independent interest.

Abstract: Dynamic graph algorithms have been extensively studied in the last two
decades due to their wide applicability in many contexts. Recently, several
implementations and experimental studies have been conducted investigating
the practical merits of fundamental techniques and algorithms. In most
cases, these algorithms required sophisticated engineering and fine-tuning
to be turned into efficient implementations. In this paper, we survey several
implementations along with their experimental studies for dynamic
problems on undirected and directed graphs. The former case includes
dynamic connectivity, dynamic minimum spanning trees, and the sparsification
technique. The latter case includes dynamic transitive closure and
dynamic shortest paths. We also discuss the design and implementation of
a software library for dynamic graph algorithms.

Abstract: We present improved methods for computing a set of alternative source-to-destination routes
in road networks in the form of an alternative graph. The resulting alternative graphs are
characterized by minimum path overlap, small stretch factor, as well as low size and complexity.
Our approach improves upon a previous one by introducing a new pruning stage preceding any
other heuristic method and by introducing a new filtering and fine-tuning of two existing methods.
Our accompanying experimental study shows that the entire alternative graph can be computed
pretty fast even in continental size networks.

Abstract: Collection selection has been a research issue for years. Typically,
in related work, precomputed statistics are employed
in order to estimate the expected result quality of each collection,
and subsequently the collections are ranked accordingly.
Our thesis is that this simple approach is insufficient
for several applications in which the collections typically
overlap. This is the case, for example, for the collections
built by autonomous peers crawling the web. We
argue for the extension of existing quality measures using
estimators of mutual overlap among collections and present
experiments in which this combination outperforms CORI,
a popular approach based on quality estimation. We outline
our prototype implementation of a P2P web search engine,
coined MINERVA, that allows handling large amounts of
data in a distributed and self-organizing manner. We conduct
experiments which show that taking overlap into account
during collection selection can drastically decrease the
number of collections that have to be contacted in order to
reach a satisfactory level of recall, which is a great step toward
the feasibility of distributed web search.
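
One way to picture overlap-aware collection selection is a greedy pass that discounts each candidate's quality score by its estimated overlap with collections already selected. The sketch below is an illustrative stand-in: the quality scores, the pairwise overlap estimates, and the max-penalty discount are all assumptions, not the combination the paper evaluates against CORI.

```python
# Hedged sketch of overlap-aware collection selection: greedily pick the
# collection with the best quality score after discounting estimated
# overlap with already-selected collections.

def select_collections(quality, overlap, k):
    """quality: name -> score; overlap: frozenset({a, b}) -> fraction in [0, 1].
    Returns up to k collection names, most novel-and-useful first."""
    chosen = []
    candidates = set(quality)
    while candidates and len(chosen) < k:
        def novel_gain(c):
            # discount by the worst estimated overlap with anything chosen
            penalty = max((overlap.get(frozenset({c, d}), 0.0) for d in chosen),
                          default=0.0)
            return quality[c] * (1.0 - penalty)
        best = max(candidates, key=novel_gain)
        chosen.append(best)
        candidates.discard(best)
    return chosen

quality = {"peer1": 0.9, "peer2": 0.85, "peer3": 0.5}
overlap = {frozenset({"peer1", "peer2"}): 0.95}   # peer2 mostly mirrors peer1
print(select_collections(quality, overlap, 2))
```

Even though peer2 scores higher than peer3 in isolation, its near-total overlap with peer1 makes peer3 the better second contact, which mirrors the recall argument in the abstract.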

Abstract: The problem of robust line planning asks for a set of
origin-destination paths (lines) along with their frequencies in an underlying
railway network infrastructure, which are robust to fluctuations of
real-time parameters of the solution.
In this work, we investigate a variant of robust line planning stemming
from recent regulations in the railway sector that introduce competition
and free railway markets, and set up a new application scenario: there is
a (potentially large) number of line operators that have their lines fixed
and operate as competing entities struggling to exploit the underlying
network infrastructure via frequency requests, while the management of
the infrastructure itself remains the responsibility of a single (typically
governmental) entity, the network operator.
The line operators are typically unwilling to reveal their true incentives.
Nevertheless, the network operator would like to ensure a fair (or, socially
optimal) usage of the infrastructure, e.g., by maximizing the (unknown to
him) aggregate incentives of the line operators. We show that this can be
accomplished in certain situations via a (possibly anonymous) incentivecompatible
pricing scheme for the usage of the shared resources, that is
robust against the unknown incentives and the changes in the demands
of the entities. This brings up a new notion of robustness, which we
call incentive-compatible robustness, that considers as robustness of the
system its tolerance to the entities' unknown incentives and elasticity
of demands, aiming at an eventual stabilization to an equilibrium point
that is as close as possible to the social optimum.

Abstract: We present a unique location-based application of sensor networks. The "Magnetize Words" installation produces word clouds that follow visitors around, interacting with each other as the visitors meet in physical space. The installation can be based on either of two localization-providing platforms. The first achieves high localization accuracy while requiring visitors to carry identifying tags. The other provides low accuracy, but is fully nonintrusive.

Abstract: With this work we aim to make a three-fold contribution.
We first address the issue of supporting efficiently queries
over string-attributes involving prefix, suffix, containment,
and equality operators in large-scale data networks. Our
first design decision is to employ distributed hash tables
(DHTs) for the data network's topology, harnessing their
desirable properties. Our next design decision is to derive
DHT-independent solutions, treating the DHT as a black box.
Second, we exploit this infrastructure to develop efficient
content-based publish/subscribe systems. The main contributions
here are algorithms for the efficient processing of
queries (subscriptions) and events (publications). Specifically,
we show that our subscription processing algorithms
require O(log N) messages for an N-node network, and our
event processing algorithms require O(l · log N) messages
(with l being the average string length).
Third, we develop algorithms for optimizing the processing
of multi-dimensional events, involving several string
attributes. Further to our analysis, we provide simulation-based
experiments showing promising performance results
in terms of number of messages, required bandwidth, load
balancing, and response times.

Abstract: In this work we discuss Kafebook, a system combining popular social networking platforms
with activity input from the physical domain inside public spaces. The system is envisioned as
a means for users to augment and communicate their activities to other people in their physical
proximity through a public display, while catering for anonymity issues. We developed a
client for Android smartphones that is used as the user interface and the enabling platform,
with which users connect to the system infrastructure and interact with it. Apart from
providing access to input from social networking sites, the Android client allows for chat
between the users and music selection polls. Wireless networking is based on Bluetooth, a
widespread technology in smartphones, which enables a more pervasive mode of operation,
while utilizing it also as a proximity sensor. We deployed our system in a two-day public
event as part of an undergraduate theses showcase, receiving positive feedback from visitors.

Abstract: This paper addresses the efficient processing of
top-k queries in wide-area distributed data
repositories where the index lists for the attribute
values (or text terms) of a query are distributed
across a number of data peers and the
computational costs include network latency,
bandwidth consumption, and local peer work.
We present KLEE, a novel algorithmic
framework for distributed top-k queries,
designed for high performance and flexibility.
KLEE makes a strong case for approximate top-k
algorithms over widely distributed data sources.
It shows how great gains in efficiency can be
enjoyed at low result-quality penalties. Further,
KLEE affords the query-initiating peer the
flexibility to trade off result quality against expected
performance, and the number of communication
phases engaged during query execution against
network bandwidth performance. We have implemented KLEE and
related algorithms and conducted a
comprehensive performance evaluation. Our
evaluation employed real-world and synthetic
large, web-data collections, and query
benchmarks. Our experimental results show that
KLEE can achieve major performance gains in
terms of network bandwidth and query response
times, with much lighter peer loads, all at the cost
of only small errors in result precision and other
result-quality measures.

Abstract: Numerous smart city testbeds and system deployments have surfaced around the world, aiming to provide services over unified large heterogeneous IoT infrastructures. Although we have achieved new scales in smart city installations and systems, so far the focus has been to provide diverse sources of data to smart city services consumers, while neglecting to provide ways to simplify making good use of them. We believe that knowledge creation in smart cities through data annotation, supported in both an automated and a crowdsourced manner, is an aspect that will bring additional value to smart cities. We present here our approach, aiming to utilize an existing smart city deployment and the OrganiCity software ecosystem. We discuss key challenges along with characteristic use cases, and report on our design and implementation, along with preliminary results.

Abstract: A new social form of play is discussed here: play taking place in public spaces, such as city parks, squares and streets, that can scale up to involve large numbers of participants and engage players located simultaneously in dispersed areas. This paper presents a platform that allows for the creation of interactive installations in public spaces. The FinN platform provides functionality for detecting and localizing people by requiring players to carry identifying devices that offer many possibilities for gesture interaction (i.e., gestures such as shaking, facilitated by the sensor bouquet of the devices). One of the deployed example installations, where visitors can interact with word formations, is presented here as a case study.

Abstract: Although we have reached new levels in smart city installations and systems, efforts so far have focused on providing diverse sources of data to smart city services consumers while neglecting to provide ways to simplify making good use of them. In this context, one first step that will bring added value to smart cities is knowledge creation in smart cities through anomaly detection and data annotation, supported in both an automated and a crowdsourced manner. We present here LearningCity, our solution that has been validated over an existing smart city deployment in Santander, and the OrganiCity experimentation-as-a-service ecosystem. We discuss key challenges along with characteristic use cases, and report on our design and implementation, together with some preliminary results derived from combining large smart city datasets with machine learning.

Abstract: In this paper we study deliberate attacks on the infrastructure of large scale-free networks. These attacks rely on the importance of individual vertices in the network in order to be successful, and the concept of centrality (originating from social science) has already been utilized in their study with success. Some measures of centrality, however, such as betweenness, have disadvantages that do not facilitate research in this area. We show that with the aid of scale-free network characteristics, such as the clustering coefficient, we can get results that balance the current centrality measures, but also gain insight into the workings of these networks.

Abstract: We address the issue of measuring storage or query-load distribution fairness in peer-to-peer data management systems. Existing metrics may look promising from the point of view of specific peers, while in reality being far from optimal from a global perspective. Thus, we first define the requirements and study the appropriateness of various statistical metrics for measuring load distribution fairness with respect to these requirements. The metric proposed as most appropriate is the Gini coefficient (G). Second, we develop novel distributed sampling algorithms to compute G on-line, with high precision, efficiently, and scalably. Third, we show how G can readily be utilized on-line by higher-level algorithms, which can now know when to best intervene to correct load imbalances. Our analysis and experiments testify to the efficiency and accuracy of these algorithms, permitting the on-line use of a rich and reliable metric that conveys a global perspective of the distribution.
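
For reference, the Gini coefficient itself is simple to compute centrally: G = 0 for perfectly equal loads and approaches 1 under extreme imbalance. The sketch below shows the standard closed-form formula on sorted values; the paper's contribution, the distributed on-line sampling estimators, is not reproduced here.

```python
# Hedged sketch of the Gini coefficient G as a load-distribution fairness
# metric, using the standard formula over sorted load values:
#   G = (2 * sum_{i=1..n} i * x_(i)) / (n * sum x) - (n + 1) / n

def gini(loads):
    xs = sorted(loads)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0                     # no load anywhere: trivially fair
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

print(gini([10, 10, 10, 10]))   # perfectly balanced peers -> 0.0
print(gini([0, 0, 0, 40]))      # one peer carries everything -> 0.75
```

A higher-level load balancer can threshold G (e.g., intervene when it exceeds some target) without inspecting every peer's load individually, which is the on-line usage the abstract describes.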

Abstract: We consider the problem of searching for a piece of
information in a fully interconnected computer network
(also called a complete network or clique) by exploiting
advice about its location from the network nodes. Each
node contains a database that "knows" what kind of
documents or information are stored in other nodes
(e.g., a node could be a Web server that answers queries
about documents stored on the Web). The databases in
each node, when queried, provide a pointer that leads to
the node that contains the information. However, this
information is up-to-date (or correct) with some
bounded probability. While, in principle, one may always
locate the information by simply visiting the network
nodes in some prescribed ordering, this requires a time
complexity in the order of the number of nodes of the
network. In this paper, we provide algorithms for locating
an information node in the complete communication
network, which take advantage of advice given from
network nodes. The nodes may either give correct advice,
by pointing directly to the information node, or give
wrong advice, by pointing elsewhere. On the lower-bounds
side, we show that no fixed-memory (i.e., with
memory independent of the network size) deterministic
algorithm may locate the information node in a constant
(independent of the network size) expected number of
steps. Moreover, if p (≥ 1/n) is the probability that a
node of an n-node clique gives correct advice, we show
that no algorithm may locate the information node in an
expected number of steps less than 1/p − o(1). To study
how the expected number of steps is affected by the
amount of memory allowed to the algorithms, we give a
memoryless randomized algorithm with expected number
of steps 4/p + o(1/p) + o(1) and a 1-bit randomized
algorithm requiring on the average at most 2/p + o(1)
steps. In addition, in the memoryless case, we also
prove a 4/p lower bound for the expected number of
steps in the case where the nodes giving faulty advice
may decide on the content of this advice in any possible
way and not merely at random (adversarial fault model).
Finally, for the case where faulty nodes behave randomly,
we give an optimal, unlimited memory deterministic
algorithm with expected number of steps bounded
from above by 1/p + o(1/p) + 1.
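
The memoryless strategy discussed above can be illustrated with a small simulation: repeatedly query a uniformly random node and follow its advice, which points at the information node with probability p. This is a hedged toy model, not the paper's exact algorithm or fault model; the constants and the treatment of wrong advice are illustrative assumptions.

```python
import random

# Hedged simulation of memoryless advice-following search in a clique:
# sample a random node; if it is the target, done; otherwise its advice is
# correct with probability p (visit the target) or wrong (try again).

def steps_to_locate(n, p, rng):
    target = 0
    steps = 0
    while True:
        steps += 1
        node = rng.randrange(n)           # query a uniformly random node
        if node == target:
            return steps
        if rng.random() < p:
            return steps + 1              # correct advice: one hop to target
        # wrong advice: the pointer leads elsewhere, keep sampling

rng = random.Random(7)
n, p = 1000, 0.1
avg = sum(steps_to_locate(n, p, rng) for _ in range(2000)) / 2000
print(round(avg, 1))                      # on the order of 1/p steps
```

Each query succeeds with probability roughly p, so the number of queries behaves like a geometric random variable with mean about 1/p, consistent with the Θ(1/p)-type bounds stated in the abstract.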

Abstract: We consider the problem of searching for a piece of information
in a fully interconnected computer network, or clique, by exploiting advice
about its location from the network nodes. Each node contains a database
that knows what kind of documents or information are stored in other nodes;
e.g., a node could be a Web server that answers queries about documents
stored on the Web. The databases in each node, when queried, provide a
pointer that leads to the node that contains the information. However, this
information is up to date (or correct) with some bounded probability. While,
in principle, one may always locate the information by simply visiting the
network nodes in some prescribed ordering, this requires a time complexity
in the order of the number of nodes of the network. In this paper we provide
algorithms for locating an information node in the complete communication
network that take advantage of advice given from network nodes. The nodes
may either give correct advice, by pointing directly to the information node,
or give wrong advice, by pointing elsewhere. We show that, on the average,
if the probability p that a node provides correct advice is asymptotically
larger than 1/n, where n is the number of the computer nodes, then the
average time complexity for locating the information node is asymptotically
4/p or 2/p, depending on the available memory. The probability p may in
general be a function of the number of network nodes n. On the lower-bounds
side, we prove that no fixed-memory deterministic algorithm can locate the
information node in a finite expected number of steps. We also prove a lower
bound of 1/p − o(1) for the expected number of steps of any algorithm that
locates the information node in the complete network.

Abstract: We briefly present the design and architecture of a system that aims to simplify the process of organizing, executing and administering crowdsensing campaigns in a smart city context over smartphones volunteered by citizens. We built our system on top of an Android app substrate on the end-user level, which enables us to utilize smartphone resources. Our system allows researchers and other developers to manage and distribute their “mini” smart city applications, gather data and publish their results through the Organicity smart city platform. We believe this is the first time such a tool is paired with a large scale IoT
infrastructure, to enable truly city-scale IoT and smart city experimentation.

Abstract: Flow control is the main technique currently used to prevent some of the offered traffic from entering a communication network, so as to avoid congestion. A challenging aspect of flow control is how to treat all sessions "fairly" when it is necessary to turn traffic away from the network. In this work, we show how to extend the theory of max-min fair flow control to the case where priorities are assigned to different varieties of traffic, which are sensitive to traffic levels. We examine priorities expressible in the general form of increasing functions of rates, also considering, in combination, the more elaborate case with inescapable upper and lower bounds on the rates of traffic sessions. We offer optimal, priority bottleneck algorithms, which iteratively adjust the session rates in order to meet a new condition of max-min fairness under priorities and rate bounds. In our setting, which is realistic for today's technology of guaranteed quality of service, traffic may be turned away not only to avoid congestion, but also to respect particular minimum requirements on bandwidth. Moreover, we establish lower bounds on the competitiveness of network-oblivious schemes compared to optimal schemes with complete knowledge of the network structure. Our theory extends significantly the classical theory of max-min fair flow control [2]. Moreover, our results on rejected traffic are fundamentally different from those related to call control and bandwidth allocation, since not only do we wish to optimize the number and rates of accepted sessions, but we also require priority fairness.

Abstract: The promises inherent in users coming together to form data
sharing network communities, bring to the foreground new problems formulated
over such dynamic, ever growing, computing, storage, and networking
infrastructures. A key open challenge is to harness these highly
distributed resources toward the development of an ultra scalable, efficient
search engine. From a technical viewpoint, any acceptable solution
must fully exploit all available resources dictating the removal of any
centralized points of control, which can also readily lead to performance
bottlenecks and reliability/availability problems. Equally importantly,
however, a highly distributed solution can also facilitate pluralism in informing
users about internet content, which is crucial in order to preclude
the formation of information-resource monopolies and the biased visibility
of content from economically-powerful sources. To meet these challenges,
the work described here puts forward MINERVA∞, a novel search
engine architecture, designed for scalability and efficiency. MINERVA∞
encompasses a suite of novel algorithms, including algorithms for creating
data networks of interest, placing data on network nodes, load balancing,
top-k algorithms for retrieving data at query time, and replication algorithms
for expediting top-k query processing. We have implemented the
proposed architecture and we report on our extensive experiments with
real-world, web-crawled, and synthetic data and queries, showcasing the
scalability and efficiency traits of MINERVA∞.

Abstract: Recommender Systems (RSs) have been extensively utilized as a means of reducing the information overload and offering travel recommendations to tourists. The emerging mobile RSs are tailored to mobile device users and promise to substantially enrich tourist experiences, recommending rich multimedia content, context-aware services, views/ratings of peer users, etc. New developments in mobile computing, wireless networking, web technologies and social networking leverage massive opportunities to provide highly accurate and effective tourist recommendations that respect personal preferences and capture usage, personal, social and environmental contextual parameters. This article follows a systematic approach in reviewing the state-of-the-art in the field, proposing a classification of mobile tourism RSs and providing insights on their offered services. It also highlights challenges and promising research directions with respect to mobile RSs employed in tourism.

Abstract: Recently, interest has arisen in network worms that propagate using Domain Name Servers (DNS) in order to discover victim hosts. These worms generate random strings, as possible network domain names, and then query Domain Name Servers in order to discover the corresponding IP addresses. In this paper we present models for the dynamics of the co-evolution of worm agents in the presence of anti-worm agents that move in the network in order to stop worm propagation. The proposed models consider anti-worm agents who know the network and anti-worm agents that do not know it and need to issue queries in order to discover valid IP addresses. We further introduce "honeypot" domain name servers that attempt to lure worms, introducing only a delay and providing no answer. We show that simply delaying the response to DNS queries issued by the worm does little to slow the worm's propagation.

Abstract: In this work, we explore context-aware application scenarios that become possible utilizing semantically-rich information derived from real-world mobility and presence traces. Traces produced by people carrying personal mobile devices, capturing social and contextual interactions, serve as enablers for Future Internet applications. We discuss the fundamental concepts, technical issues and related research challenges. We propose a reference architecture for setting up a system that collects such traces in a Smart City environment. We present the algorithms used to process the traces and infer interactions and interests for the observed populations. We conduct two 3-day trial deployments: one in an office environment and the other in the context of a Smart Conference application. We discuss our findings regarding the system's capability to track interactions and the overall efficacy of the application.

Abstract: In this work, we showcase a set of implemented multiplayer games and interactive installations based on Fun in Numbers (FinN). FinN allows the quick prototyping of applications that utilize input from multiple physical sources (sensors and other means of interfacing), by offering a set of programming templates and services, such as proximity, localization and synchronization, that hide the underlying complexity.

Abstract: Evolutionary dynamics have been traditionally studied in the context of homogeneous populations, mainly described by the Moran process [15]. Recently, this approach has been generalized in [13] by arranging individuals on the nodes of a network (in general, directed). In this setting, the existence of directed arcs enables the simulation of extreme phenomena, where the fixation probability of a randomly placed mutant (i.e. the probability that the offspring of the mutant eventually spread over the whole population) is arbitrarily small or large. On the other hand, undirected networks (i.e. undirected graphs) seem to have a smoother behavior, and thus it is more challenging to find suppressors/amplifiers of selection, that is, graphs with smaller/greater fixation probability than the complete graph (i.e. the homogeneous population). In this paper we focus on undirected graphs. We present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation
probability of general undirected graphs. As our main contribution, we introduce a natural alternative to the model proposed in [13]. In our new evolutionary model, all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. That is, the behavior of the individuals in our new model and in the model of [13] can be interpreted as an “aggregation” vs. an “all-or-nothing” strategy, respectively. We prove that our new model of mutual influences admits a potential function, which guarantees the convergence of the system for any graph topology and any initial fitness vector of the individuals. Furthermore, we prove fast convergence to the stable state for the case of the complete graph, and provide almost tight bounds on the limit fitness of the individuals. Apart from being important in its own right, this new evolutionary model appears to be useful also in the abstract modeling of control mechanisms over invading populations in networks. We demonstrate this by introducing and analyzing two alternative control approaches, for which we bound the time needed to stabilize to the “healthy” state of the system.
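
The classical Moran process that this line of work builds on can be sketched with a short simulation. The complete-graph example, the fitness value, and the run counts below are illustrative assumptions, not the paper's models.

```python
import random

def moran_fixation(n, r, rng):
    """One run of the Moran process on the complete graph K_n with a
    single initial mutant of relative fitness r; returns True if the
    mutant lineage fixates (takes over the whole population)."""
    mutants = 1
    while 0 < mutants < n:
        # An individual reproduces with probability proportional to fitness.
        if rng.random() < r * mutants / (r * mutants + (n - mutants)):
            # A mutant reproduces; its offspring replaces one of the other
            # n-1 individuals, which is a resident w.p. (n-mutants)/(n-1).
            if rng.random() < (n - mutants) / (n - 1):
                mutants += 1
        else:
            # A resident reproduces; it replaces a mutant w.p. mutants/(n-1).
            if rng.random() < mutants / (n - 1):
                mutants -= 1
    return mutants == n

rng = random.Random(1)
n, r = 10, 2.0
est = sum(moran_fixation(n, r, rng) for _ in range(5000)) / 5000
exact = (1 - 1 / r) / (1 - 1 / r**n)  # known closed form for K_n
# est approximates exact for these parameters.
```

The closed form for the complete graph is the baseline against which suppressors (lower fixation probability) and amplifiers (higher) are measured.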

Abstract: Geographic routing scales well in sensor networks, mainly
due to its stateless nature. Still, most of the algorithms are
concerned with finding some path, while the optimality of
the path is difficult to achieve. In this paper we are presenting
a novel geographic routing algorithm with obstacle
avoidance properties. It aims at finding the optimal path
from a source to a destination when some areas of the network
are unavailable for routing due to low local density or
obstacle presence. It locally, and gradually over time (but,
as we show, quite fast), evaluates and updates the suitability
of the previously used paths and excludes non-optimal paths
from further routing. By means of extensive simulations, we
compare its performance to existing state-of-the-art
protocols, showing that it performs much better in terms of
path length, thus minimizing latency, space, overall traffic
and energy consumption.

Abstract: We propose new burst assembly schemes and fast reservation (FR) protocols for Optical Burst Switched (OBS) networks that are based on traffic prediction. The burst assembly schemes aim at minimizing (for a given burst size) the average delay of the packets incurred during the burst assembly process, while the fast reservation protocols aim at further reducing the end-to-end delay of the data bursts. The burst assembly techniques use a linear prediction filter to estimate the number of packet arrivals at the ingress node in the following interval, and launch a new burst into the network when a certain criterion, different for each proposed scheme, is met. The fast reservation protocols use prediction filters to estimate the expected length of the burst and the time needed for the burst assembly process to complete. A Burst Header Packet (BHP) carrying these estimates is sent before the burst is completed, in order to reserve bandwidth at intermediate nodes for the time interval during which the burst is expected to pass through these nodes. Reducing the packet aggregation delay and the time required to perform the reservations reduces the total time needed for a packet to be transported over an OBS network, and is especially important for real-time applications. We evaluate the performance of the proposed burst assembly schemes and show that a number of them outperform the previously proposed timer-based, length-based and average delay-based burst assembly schemes. We also look at the performance of the fast reservation (FR) protocols in terms of the probability of successfully establishing the reservations required to transport the burst.

Abstract: We propose new burst assembly techniques that aim at reducing the average delay experienced by the packets during the burstification process in optical burst switched (OBS) networks, for a given average size of the bursts produced. These techniques use a linear prediction filter to estimate the number of packet arrivals at the ingress node in the following interval, and launch a new burst into the network when a certain criterion, which is different for each proposed scheme, is met. Reducing the packet burstification delay, for a given average burst size, is essential for real-time applications; correspondingly, increasing the average burst size for a given packet burstification delay is important for reducing the number of bursts injected into the network and the associated overhead imposed on the core nodes. We evaluate the performance of the proposed schemes and show that two of them outperform the previously proposed timer-based, length-based and average delay-based burst aggregation schemes in terms of the average packet burstification delay for a given average burst size.
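
The prediction-driven assembly idea in the two abstracts above can be sketched as follows. The equal-weight predictor and the "queue plus forecast reaches target" launch criterion are illustrative assumptions; the papers' schemes use tuned linear prediction filters and several different criteria.

```python
from collections import deque

class PredictiveAssembler:
    """Sketch of a burst assembler that forecasts the next interval's
    packet arrivals with a simple linear predictor (here: the mean of
    the last `window` interval counts) and launches a burst early when
    the queued packets plus the forecast reach the target burst size."""

    def __init__(self, target_burst_size, window=4):
        self.target = target_burst_size
        self.history = deque(maxlen=window)
        self.queue = 0

    def predict_next(self):
        # Equal-weight linear predictor; a real scheme would tune the
        # filter coefficients (e.g. by least squares on past traffic).
        if not self.history:
            return 0.0
        return sum(self.history) / len(self.history)

    def end_of_interval(self, arrivals):
        """Record this interval's arrivals; return the launched burst
        size, or None if assembly continues."""
        self.queue += arrivals
        self.history.append(arrivals)
        if self.queue + self.predict_next() >= self.target:
            burst, self.queue = self.queue, 0
            return burst
        return None

asm = PredictiveAssembler(target_burst_size=100)
launches = [asm.end_of_interval(a) for a in [20, 30, 25, 30, 10, 40]]
# -> [None, None, 75, None, None, 80]
```

Launching when the forecast, rather than the actual queue, crosses the threshold is what trims the last interval of waiting off the burstification delay.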

Abstract: Wireless Sensor Networks (WSNs) constitute a recent and promising new
technology that is widely applicable. Due to the applicability of this
technology and its obvious importance for the modern distributed
computational world, the formal scientific foundation of its inherent laws
becomes essential. As a result, many new computational models for WSNs
have been proposed. Population Protocols (PPs) are a special category of
such systems. These are mainly identified by three distinctive
characteristics: the sensor nodes (agents) move passively, that is, they
cannot control the underlying mobility pattern, the available memory to
each agent is restricted, and the agents interact in pairs. It has been
proven that a predicate is computable by the PP model iff it is
semilinear. The class of semilinear predicates is a fairly small class. In
this work, our basic goal is to enhance the PP model in order to improve
the computational power. We first make the assumption that not only the
nodes but also the edges of the communication graph can store restricted
states. In a complete graph of n nodes it is like having added O(n^2)
additional memory cells which are only read and written by the endpoints
of the corresponding edge. We prove that the new model, called Mediated
Population Protocol model, can operate as a distributed nondeterministic
Turing machine (TM) that uses all the available memory. The only
difference from a usual TM is that this one computes only symmetric
languages. More formally, we establish that a predicate is computable by
the new model iff it is symmetric and belongs to NSPACE(n^2). Moreover, we
study the ability of the new model to decide graph languages (for general
graphs). The next step is to ignore the states of the edges and provide
another enhancement, built directly on the PP model. The assumption now is
that the agents are multitape TMs equipped with infinite memory, that can
perform internal computation and interact with other agents, and we define
space-bounded computations. We call this the Passively mobile Machines
model. We prove that if each agent uses at most f(n) memory for
f(n)=Ω(log n), then a predicate is computable iff it is symmetric and
belongs to NSPACE(n f(n)). We also show that this is not the case for
f(n)=o(log n). Based on these, we show that for f(n)=Ω(log n) there
exists a space hierarchy like the one for classical symmetric TMs. We
also show that the latter is not the case for f(n)=o(loglog n), since
there the corresponding class collapses into the class of semilinear
predicates, and finally that for f(n)=Ω(loglog n) the class becomes a
proper superset of the semilinear predicates. We leave open the problem
of characterizing the classes for f(n)=Ω(loglog n) and f(n)=o(log n).

Abstract: In this paper we discuss different switch architectures, focusing mainly on optical buffering. We investigate an all-optical buffer architecture comprising cascaded stages of quantum-dot semiconductor optical amplifier-based tunable wavelength converters, at 160 Gb/s. We also propose an optical buffer with multi-wavelength converters based on quantum-dot semiconductor optical amplifiers. We present multistage switching fabrics with optical buffers, where the optical buffers are based on fibre delay lines and are located in the first stage. Finally, we describe a photonic asynchronous packet switch and show that the employment of a few optical buffer stages to complement the electronic ones significantly improves the switch performance. We also propose two asynchronous optical packet switching node architectures, where efficient contention resolution is based on controllable optical buffers and tunable wavelength converters (TWCs).

Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → IN such that |Φ(u)-Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u)-Φ(v)| ≥ 1 when the minimum distance of u, v in G is two. The number of discrete frequencies and the range of frequencies used are called order and span, respectively. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span or the order. In this paper we prove that the min span RCP is NP-complete for planar graphs. Next, we provide an O(nΔ) time algorithm (|V| = n) which obtains a radiocoloring of a planar graph G that approximates the minimum order within a ratio which tends to 2 (where Δ is the maximum degree of G). Finally, we provide a fully polynomial randomized approximation scheme (fpras) for the number of valid radiocolorings of a planar graph G with λ colors, in the case λ ≥ 4Δ + 50.
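
The radiocoloring condition itself is easy to check mechanically. The following sketch validates an assignment against the two distance constraints; the set-based graph representation and the example path graph are illustrative choices.

```python
from itertools import combinations

def is_valid_radiocoloring(adj, phi):
    """Check the radiocoloring condition: |phi(u) - phi(v)| >= 2 for
    adjacent u, v, and distinct colors (|phi(u) - phi(v)| >= 1) for
    u, v at distance exactly two (non-adjacent, sharing a neighbor)."""
    for u, v in combinations(adj, 2):
        if v in adj[u]:                     # adjacent pair
            if abs(phi[u] - phi[v]) < 2:
                return False
        elif adj[u] & adj[v]:               # common neighbor => distance two
            if phi[u] == phi[v]:
                return False
    return True

# A path a-b-c: adjacent vertices need colors >= 2 apart, and a, c
# (at distance two) need different colors.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
ok = is_valid_radiocoloring(path, {"a": 1, "b": 3, "c": 5})      # valid
bad = is_valid_radiocoloring(path, {"a": 1, "b": 2, "c": 4})     # |1-2| < 2
```

The span of the valid coloring above is 5 - 1 = 4 and its order is 3, matching the two quantities the RCP variants minimize.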

Abstract: A constraint network is arc consistent if any value of any of its variables is compatible with at
least one value of any other variable. The Arc Consistency Problem (ACP) consists in filtering out values of
the variables of a given network to obtain one that is arc consistent, without eliminating any solution. ACP is
known to be inherently sequential, or P-complete, so in this paper we examine some weaker versions of it and
their parallel complexity. We propose several natural approximation schemes for ACP and show that they are also
P-complete. In an attempt to overcome these negative results, we turn our attention to the problem of filtering
out values from the variables so that each value in the resulting network is compatible with at least one value of
not necessarily all, but a constant fraction of the other variables. We call such a network partially arc consistent.
We give a parallel algorithm that, for any constraint network, outputs a partially arc consistent subnetwork of it in
sublinear (O(√n log n)) parallel time using O(n^2) processors. This is the first (to our knowledge) sublinear-time
parallel algorithm with polynomially many processors that guarantees that in the resulting network every value is
compatible with at least one value in at least a constant fraction of the remaining variables. Finally, we generalize
the notion of partiality to the k-consistency problem.
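
For reference, full arc consistency filtering (the inherently sequential problem discussed above) can be sketched with a naive AC-1-style fixed-point loop. The binary-relation interface and the toy network are illustrative assumptions.

```python
def enforce_arc_consistency(domains, compatible):
    """Naive (AC-1 style) arc consistency filtering: repeatedly delete
    any value of a variable that has no compatible value in some other
    variable's domain, until a fixed point is reached, without
    eliminating any solution. `compatible(x, a, y, b)` says whether
    value a for variable x is compatible with value b for variable y."""
    domains = {x: set(d) for x, d in domains.items()}
    changed = True
    while changed:
        changed = False
        for x in domains:
            for y in domains:
                if x == y:
                    continue
                # Values of x with no support in y's current domain.
                dead = {a for a in domains[x]
                        if not any(compatible(x, a, y, b) for b in domains[y])}
                if dead:
                    domains[x] -= dead
                    changed = True
    return domains

# Toy network: two variables constrained to take different values.
doms = {"x": {1, 2}, "y": {2}}
out = enforce_arc_consistency(doms, lambda x, a, y, b: a != b)
# x loses the value 2, which has no support in y's domain.
```

Partial arc consistency relaxes the inner check: a value need only have support in a constant fraction of the other variables, which is what makes the sublinear parallel algorithm possible.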

Abstract: The voting rules proposed by Dodgson and Young are both
designed to find the alternative closest to being a Condorcet
winner, according to two different notions of proximity; the
score of a given alternative is known to be hard to compute
under either rule.
In this paper, we put forward two algorithms for ap-
proximating the Dodgson score: an LP-based randomized
rounding algorithm and a deterministic greedy algorithm,
both of which yield an O(logm) approximation ratio, where
m is the number of alternatives; we observe that this result
is asymptotically optimal, and further prove that our greedy
algorithm is optimal up to a factor of 2, unless problems in
NP have quasi-polynomial time algorithms. Although the
greedy algorithm is computationally superior, we argue that
the randomized rounding algorithm has an advantage from
a social choice point of view.
Further, we demonstrate that computing any reasonable
approximation of the ranking produced by Dodgson's rule
is NP-hard. This result provides a complexity-theoretic
explanation of sharp discrepancies that have been observed
in the Social Choice Theory literature when comparing
Dodgson elections with simpler voting rules.
Finally, we show that the problem of calculating the
Young score is NP-hard to approximate by any factor. This
leads to an inapproximability result for the Young ranking.

Abstract: It was only recently shown by Shi and Wormald, using the differential equation method to analyze an appropriate algorithm, that a random 5-regular graph asymptotically almost surely has chromatic number at most 4. Here, we show that the chromatic number of a random 5-regular graph is asymptotically almost surely equal to 3, provided a certain four-variable function has a unique maximum at a given point in
a bounded domain. We also describe extensive numerical evidence that strongly suggests that the latter condition holds. The proof applies the small subgraph conditioning method to the number of locally rainbow balanced 3-colorings, where a coloring is balanced if the number of vertices of each color is equal, and locally rainbow if every vertex is adjacent to at least one vertex of each of the other colors.

Abstract: We consider the generation of prime-order elliptic curves (ECs) over a prime field $\mathbb{F}_{p}$ using the Complex Multiplication (CM) method. A crucial step of this method is to compute the roots of a special type of class field polynomials, the most commonly used being the Hilbert and Weber ones. These polynomials are uniquely determined by the CM discriminant D. In this paper, we consider a variant of the CM method for constructing elliptic curves (ECs) of prime order using Weber polynomials. In attempting to construct prime-order ECs using Weber polynomials, two difficulties arise (in addition to the necessary transformations of the roots of such polynomials to those of their Hilbert counterparts). The first one is that the requirement of prime order necessitates that D ≡ 3 (mod 8), which gives Weber polynomials with degree three times larger than the degree of their corresponding Hilbert polynomials (a fact that could affect efficiency). The second difficulty is that these Weber polynomials do not have roots in $\mathbb{F}_{p}$.
In this work, we show how to overcome the above difficulties and provide efficient methods for generating ECs of prime order focusing on their support by a thorough experimental study. In particular, we show that such Weber polynomials have roots in the extension field $\mathbb{F}_{p^{3}}$ and present a set of transformations for mapping roots of Weber polynomials in $\mathbb{F}_{p^{3}}$ to roots of their corresponding Hilbert polynomials in $\mathbb{F}_{p}$ . We also show how an alternative class of polynomials, with degree equal to their corresponding Hilbert counterparts (and hence having roots in $\mathbb{F}_{p}$ ), can be used in the CM method to generate prime-order ECs. We conduct an extensive experimental study comparing the efficiency of using this alternative class against the use of the aforementioned Weber polynomials. Finally, we investigate the time efficiency of the CM variant under four different implementations of a crucial step of the variant and demonstrate the superiority of two of them.

Abstract: In routing games, the network performance at equilibrium can be significantly improved if we remove some edges from the network. This counterintuitive fact, widely known as Braess's paradox, gives rise to the (selfish) network design problem, where we seek to recognize routing games suffering from the paradox, and to improve the equilibrium performance by edge removal. In this work, we investigate the computational complexity and the approximability of the network design problem for non-atomic bottleneck routing games, where the individual cost of each player is the bottleneck cost of her path, and the social cost is the bottleneck cost of the network. We first show that bottleneck routing games do not suffer from Braess's paradox either if the network is series-parallel, or if we consider only subpath-optimal Nash flows. On the negative side, we prove that even for games with strictly increasing linear latencies, it is NP-hard not only to recognize instances suffering from the paradox, but also to distinguish between instances for which the Price of Anarchy (PoA) can decrease to 1 and instances for which the PoA is as large as \Omega(n^{0.121}) and cannot improve by edge removal. Thus, the network design problem for such games is NP-hard to approximate within a factor of O(n^{0.121-\eps}), for any constant \eps > 0. On the positive side, we show how to compute an almost optimal subnetwork w.r.t. the bottleneck cost of its worst Nash flow, when the worst Nash flow in the best subnetwork routes a non-negligible amount of flow on all used edges. The running time is determined by the total number of paths, and is quasipolynomial when the number of paths is quasipolynomial.

Abstract: For various random constraint satisfaction problems there is a significant gap between the largest constraint density
for which solutions exist and the largest density for which any polynomial time algorithm is known to find
solutions. Examples of this phenomenon include random k-SAT, random graph coloring, and a number of other
random Constraint Satisfaction Problems. To understand this gap, we study the structure of the solution space of
random k-SAT (i.e., the set of all satisfying assignments viewed as a subgraph of the Hamming cube). We prove
that for densities well below the satisfiability threshold, the solution space decomposes into an exponential number
of connected components and give quantitative bounds for the diameter, volume and number.

Abstract: For various random constraint satisfaction problems there is a significant gap between
the largest constraint density for which solutions exist and the largest density for which any polynomial
time algorithm is known to find solutions. Examples of this phenomenon include random
k-SAT, random graph coloring, and a number of other random constraint satisfaction problems. To
understand this gap, we study the structure of the solution space of random k-SAT (i.e., the set of
all satisfying assignments viewed as a subgraph of the Hamming cube). We prove that for densities
well below the satisfiability threshold, the solution space decomposes into an exponential number of
connected components and give quantitative bounds for the diameter, volume, and number.

Abstract: We study network connection games where the nodes of a network perform edge swaps in order to improve their communication costs. For the model proposed by [2], in which the selfish cost of a node is the sum of all shortest path distances to the other nodes, we use the probabilistic method to provide a new, structural characterization of equilibrium graphs. We show how to use this characterization in order to prove upper bounds on the diameter of equilibrium graphs in terms of the size of the largest k-vicinity (defined as the set of vertices within distance k from a vertex), for any k ≥ 1, and in terms of the number of edges, thus settling positively a conjecture of [2] in the cases of graphs of large k-vicinity size (including graphs of large maximum degree) and of graphs which are dense enough.
Next, we present a new swap-based network creation game, in which selfish costs depend on the immediate neighborhood of each node; in particular, the profit of a node is defined as the sum of the degrees of its neighbors. We prove that, in contrast to the previous model, this network creation game admits an exact potential, and also that any equilibrium graph contains an induced star. The existence of the potential function is exploited in order to show that an equilibrium can be reached in expected polynomial time even in the case where nodes can only acquire limited knowledge concerning non-neighboring nodes.
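
The profit function of the second game (sum of neighbor degrees) and a single improving edge swap can be sketched as follows. The best-response rule and the toy path graph are illustrative assumptions, not the paper's exact dynamics.

```python
def profit(adj, u):
    """Profit of node u in the neighborhood-based swap game:
    the sum of the degrees of u's neighbors."""
    return sum(len(adj[v]) for v in adj[u])

def try_improving_swap(adj, u):
    """Let u replace one incident edge (u, v) by (u, w) if that
    strictly increases its profit; returns True if a swap was made."""
    for v in list(adj[u]):
        for w in list(adj):
            if w == u or w in adj[u]:
                continue
            before = profit(adj, u)
            adj[u].remove(v); adj[v].remove(u)   # drop (u, v)
            adj[u].add(w);    adj[w].add(u)      # add (u, w)
            if profit(adj, u) > before:
                return True                       # keep the swap
            adj[u].remove(w); adj[w].remove(u)   # revert
            adj[u].add(v);    adj[v].add(u)
    return False

# On the path 0-1-2-3, node 0 profits from swapping its edge
# away from node 1 (degree 2) toward node 2, whose degree grows to 3.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
swapped = try_improving_swap(adj, 0)
```

Repeating such improving swaps in any order must terminate because the game admits an exact potential, as stated in the abstract; this sketch performs only one step.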

Abstract: We study the on-line versions of two fundamental graph problems, maximum independent set and minimum coloring, for the case of disk graphs which are graphs resulting from intersections of disks on the plane. In particular, we investigate whether randomization can be used to break known lower bounds for deterministic on-line independent set algorithms and present new upper and lower bounds; we also present an improved upper bound for on-line coloring.

Abstract: We propose and evaluate an impairment-aware multi-parametric routing and wavelength assignment algorithm for online traffic in transparent optical networks. In such networks the signal quality of transmission degrades due to physical layer impairments. In the multiparametric approach, a vector of cost parameters is assigned to each link, from which the cost vectors of candidate lightpaths are calculated. In the proposed scheme the cost vector includes impairment generating source parameters, such as the path length, the number of hops, the number of crosstalk sources and other inter-lightpath interfering parameters, so as to indirectly account for the physical layer effects. For a requested connection the algorithm calculates a set of candidate lightpaths, whose quality of transmission is validated using a function that combines the impairment generating parameters. For selecting the lightpath we propose and evaluate various optimization functions that correspond to different IA-RWA algorithms. Our performance results indicate that the proposed algorithms utilize efficiently the available resources and minimize the total accumulated signal degradation on the selected lightpaths, while having low execution times.

Abstract: One oft-cited strategy towards sustainability is improving energy efficiency inside public buildings. In this context, the educational buildings sector presents a very interesting and important case for the monitoring and management of buildings, since it addresses both energy and educational issues. In this work, we present and discuss the hardware IoT infrastructure substrate that provides real-time monitoring in multiple school buildings. We believe that such a system needs to follow an open design approach: rely on hardware-agnostic components that communicate over well-defined open interfaces. We present in detail the design of our hardware components, while also providing insights to the overall system design and a first set of results on their operation. The presented hardware components are utilized as the core hardware devices for GAIA, an EU research project aimed at the educational community. As our system has been deployed and tested in several public school buildings in Greece, we also report on its validation.

Abstract: This paper studies the data gathering problem in wireless networks, where data generated at the nodes has to be collected at a single sink. We investigate the relationship between routing optimality and fair resource management. In particular, we prove that for energy-balanced data propagation, Pareto optimal routing and flow maximization are equivalent, and also prove that flow maximization is equivalent to maximizing the network lifetime. We algebraically characterize the network structures in which energy-balanced data flows are maximal. Moreover, we algebraically characterize communication links which are not used by an optimal flow. This leads to the characterization of minimal network structures supporting the maximal flows.
We note that energy balance, although implying global optimality, is a local property that can be computed efficiently and in a distributed manner. We suggest online distributed algorithms for energy balance in different optimal network structures and numerically show their stability in particular settings. We remark that although the results obtained in this paper have a direct consequence in energy saving for wireless networks, they are not limited to this type of network, nor to energy as a resource. In fact, the results are much more general and can be used for any type of network and different types of resources.
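A minimal sketch of the flavor of such an online, distributed energy-balance heuristic. The relay-selection rule below (greedily favor the relay with the most remaining energy at each hop) is an illustrative assumption, not the paper's exact algorithm; its point is only that the decision uses purely local information.

```python
def pick_relay(energies, candidates):
    """Each hop locally chooses the candidate relay with the most energy left."""
    return max(candidates, key=lambda node: energies[node])

def propagate(energies, hops, cost=1.0):
    """Route one message toward the sink; `hops` lists candidate relays per hop."""
    route = []
    for candidates in hops:
        relay = pick_relay(energies, candidates)
        energies[relay] -= cost          # the chosen relay pays the transmission cost
        route.append(relay)
    return route
```

Repeated over many messages, this local rule pushes the residual energies of competing relays toward each other, which is the balancing effect the abstract refers to.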

Abstract: Network continuous-media applications are emerging with a great pace. Cache memories have long been recognized as a key resource (along with network bandwidth) whose intelligent exploitation can ensure high performance for such applications. Cache memories exist at the continuous-media servers and their proxy servers in the network. Within a server, cache memories exist in a hierarchy (at the host, the storage-devices, and at intermediate multi-device controllers). Our research is concerned with how to best exploit these resources in the context of continuous media servers and in particular, how to best exploit the available cache memories at the drive, the disk array controller, and the host levels. Our results determine under which circumstances and system configurations it is preferable to devote the available memory to traditional caching (a.k.a. “data sharing”) techniques as opposed to prefetching techniques. In addition, we show how to configure the available memory for optimal performance and optimal cost. Our results show that prefetching techniques are preferable for small-size caches (such as those expected at the drive level). For very large caches (such as those employed at the host level) caching techniques are preferable. For intermediate cache sizes (such as those at multi-device controllers) a combination of both strategies should be employed.

Abstract: The proliferation of peer-to-peer (P2P) systems has come with various compelling applications including file sharing based on distributed hash tables (DHTs) or other kinds of overlay networks. Searching the content of files (especially Web search) requires multi-keyword querying with scoring and ranking. Existing approaches have no way of taking into account the correlation between the keywords in the query. This paper presents our solution that incorporates the queries and behavior of the users in the P2P network such that interesting correlations can be inferred.

Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks. We
call it the PALOMA model, standing for PAssively mobile LOgarithmic space MAchines. The main
modification w.r.t. the Population Protocol model [2] is that agents now, instead of being automata, are
Turing Machines whose memory is logarithmic in the population size n. Note that the new model is still
easily implementable with current technology. We focus on complete communication graphs. We define
the complexity class PLM, consisting of all symmetric predicates on input assignments that are stably
computable by the PALOMA model. We assume that the agents are initially identical. Surprisingly, it
turns out that the PALOMA model can assign unique consecutive ids to the agents and inform them
of the population size! This allows us to give a direct simulation of a Deterministic Turing Machine
of O(n log n) space, thus, establishing that any symmetric predicate in SPACE(n log n) also belongs
to PLM. We next prove that the PALOMA model can simulate the Community Protocol model [15],
thus, improving the previous lower bound to all symmetric predicates in NSPACE(n log n). Going
one step further, we generalize the simulation of the deterministic TM to prove that the PALOMA
model can simulate a Nondeterministic TM of O(n log n) space. Although providing the same lower
bound, the important remark here is that the bound is now obtained in a direct manner, in the sense
that it does not depend on the simulation of a TM by a Pointer Machine. Finally, by showing that a
Nondeterministic TM of O(n log n) space decides any language stably computable by the PALOMA
model, we end up with an exact characterization for PLM: it is precisely the class of all symmetric
predicates in NSPACE(n log n).
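For context, a toy instance of the underlying Population Protocol model that PALOMA generalizes: anonymous constant-state agents interacting in scheduler-chosen pairs. The protocol below stably computes the OR predicate "some agent's input is 1" (a semilinear predicate, so it is already computable in the base model); PALOMA agents would instead carry O(log n)-space Turing machines.

```python
import random

def run_or_protocol(inputs, steps=10_000, seed=0):
    """Simulate the one-bit OR population protocol under a random scheduler."""
    rng = random.Random(seed)
    states = list(inputs)                 # each agent's state doubles as its output
    n = len(states)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)    # the scheduler picks a pair of distinct agents
        v = states[i] | states[j]         # transition rule: both agents adopt the OR
        states[i] = states[j] = v
    return states
```

Once any agent holds 1, the transition rule can only spread it, so under a fair scheduler every agent stabilizes to the correct predicate value.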

Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks, called PM, standing for Passively mobile Machines. The main modification w.r.t. the Population Protocol model [Angluin et al. 2006] is that agents now, instead of being automata, are Turing Machines. We provide general definitions for unbounded memories, but we are mainly interested in computations upper-bounded by plausible space limitations. However, we prove that our results hold for more general cases. We focus on complete interaction graphs and define the complexity classes PMSPACE(f(n)) parametrically, consisting of all predicates that are stably computable by some PM protocol that uses O(f(n)) memory in each agent. We provide a protocol that generates unique identifiers from scratch only by using O(log n) memory, and use it to provide an exact characterization of the classes PMSPACE(f(n)) when f(n) = Ω(log n): they are precisely the classes of all symmetric predicates in NSPACE(nf(n)). As a consequence, we obtain a space hierarchy of the PM model when the memory bounds are Ω(log n). We next explore the computability of the PM model when the protocols use o(loglog n) space per machine and prove that SEM = PMSPACE(f(n)) when f(n) = o(loglog n), where SEM denotes the class of the semilinear predicates. Finally, we establish that the minimal space requirement for the computation of non-semilinear predicates is O(log log n).

Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks, called PM, standing for Passively mobile Machines. The main modification w.r.t. the Population Protocol model [Angluin et al. 2006] is that the agents now, instead of being automata, are Turing Machines. We provide general definitions for unbounded memories, but we are mainly interested in computations upper-bounded by plausible space limitations. However, we prove that our results hold for more general cases. We focus on complete interaction graphs and define the complexity classes PMSPACE(f(n)) parametrically, consisting of all predicates that are stably computable by some PM protocol that uses O(f(n)) memory in each agent. We provide a protocol that generates unique identifiers from scratch only by using O(log n) memory, and use it to provide an exact characterization of the classes PMSPACE(f(n)) when f(n)=Omega(log n): they are precisely the classes of all symmetric predicates in NSPACE(nf(n)). As a consequence, we obtain a space hierarchy of the PM model when the memory bounds are Omega(log n). Finally, we establish that the minimal space requirement for the computation of non-semilinear predicates is O(log log n).

Abstract: In this work we propose and develop a comprehensive infrastructure, coined PastryStrings, for supporting rich queries on both numerical attributes (with range and comparison predicates) and string attributes (accommodating equality, prefix, suffix, and containment predicates) over DHT networks utilising prefix-based routing. As event-based, publish/subscribe information systems are a champion application class, we formulate our solution in terms of this environment.

Abstract: This work discusses PatrasSense, a system utilizing a range of technologies for monitoring environmental conditions, based on the concept of participatory monitoring. We discuss here the main issues in this category of applications, and its relation to the Real World Internet vision. We are interested in the use of sensors for collecting data related to the quality of life in urban areas, making it publicly available. We describe a possible architecture for such a system and a plan for deployment in the city of Patras, Greece. We have thus far implemented a set of features and conducted an initial set of small-scale experiments using sensor devices mounted on vehicles. We report on the respective results.

Abstract: We demonstrate the use of impairment constraint routing
for performance engineering of transparent metropolitan area
optical networks. Our results show the relationship between
blocking probability and different network characteristics such
as span length, amplifier noise figure, and bit rate, and provide
information on the system specifications required to achieve
acceptable network performance.

Abstract: We present a detailed performance evaluation of a
hybrid optical switching architecture called Overspill Routing in
Optical Networks (ORION). The ORION architecture combines
wavelength and (electronic) packet switching, so as to obtain the
advantages of both switching paradigms. We have developed an
extensive network simulator where the basic features of the
ORION architecture were modeled, including suitable load-varying
sources and edge/core node architectures. Various aspects
of the ORION architecture were studied including the routing
policies used (i.e. once ORION always ORION and lightpath reentry)
and the various options available for the buffer
architecture. The complete network study shows that ORION can
absorb temporary traffic overloads, as intended, provided
sufficient buffering is present.

Abstract: Pervasive games represent a radically new game form that extends
gaming experiences out into the physical world, weaving ICTs
into the fabric of players’ real environment. This emerging type of
games is rather challenging for developers who are engaged in
exploring new technologies and methods to achieve high quality
interactive experience for players. This paper follows a systematic
approach to explore the landscape of pervasive gaming. First, we
present ten representative pervasive game projects, classified in
five genres based on their playing environment and features.
Then, we present a comparative view of those projects with
respect to several design aspects. Last, we shed light on current
trends, design principles and future directions for pervasive games
development.

Abstract: We present a polarization-insensitive NOLM, employing an FRM and non-PM fibers as the nonlinear
medium. The circuit is configured to operate as a compressor and its performance remains stable irrespective of
any birefringence perturbations.

Abstract: In this work we extend the population protocol model of Angluin et al., in
order to model more powerful networks of very small resource limited
artefacts (agents) that is possible to follow some unpredictable passive
movement. These agents communicate in pairs according to the commands of
an adversary scheduler. A directed (or undirected) communication graph
encodes the following information: each edge (u,v) denotes that during the
computation it is possible for an interaction between u and v to happen in
which u is the initiator and v the responder. The new characteristic of
the proposed mediated population protocol model is the existence of a
passive communication provider that we call mediator. The mediator is a
simple database with communication capabilities. Its main purpose is to
maintain the permissible interactions in communication classes, whose
number is constant and independent of the population size. For this reason
we assume that each agent has a unique identifier for whose existence the
agent itself is not informed and thus cannot store it in its working
memory. When two agents are about to interact they send their ids to the
mediator. The mediator searches for that ordered pair in its database and
if it exists in some communication class it sends back to the agents the
state corresponding to that class. If this interaction is not permitted to
the agents, or, in other words, if this specific pair does not exist in
the database, the agents are informed to abort the interaction. Note that
in this manner for the first time we obtain some control on the safety of
the network and moreover the mediator provides us at any time with the
network topology. Equivalently, we can model the mediator by communication
links that are capable of keeping states from an edge state set of constant
cardinality. This alternative way of thinking of the new model has many
advantages concerning the formal modeling and the design of protocols,
since it enables us to abstract away the implementation details of the
mediator. Moreover, we extend further the new model by allowing the edges
to keep read-only costs, whose values also belong to a constant-size
set. We then allow the protocol rules for pairwise interactions to modify
the corresponding edge state by also taking into account the costs. Thus,
our protocol descriptions are still independent of the population size and
do not use agent ids, i.e. they preserve scalability, uniformity and
anonymity. The proposed Mediated Population Protocols (MPP) can stably
compute graph properties of the communication graph. We show this for the
properties of maximal matchings (in undirected communication graphs), also
for finding the transitive closure of directed graphs and for finding all
edges of small cost. We demonstrate that our mediated protocols are
stronger than the classical population protocols. First of all we notice
an obvious fact: the classical model is a special case of the new model,
that is, the new model can compute at least as much as the
classical one. We then present a mediated protocol that stably computes
the product of two nonnegative integers in the case where G is complete
directed and connected. Such predicates are not semilinear, and it
has been proven that classical population protocols in complete graphs can
compute precisely the semilinear predicates, thus in this manner we show
that there is at least one predicate that our model computes and which the
classical model cannot compute. To show this fact, we state and prove a
general Theorem about the composition of two mediated population
protocols, where the first one has stabilizing inputs. We also show that
all predicates stably computable in our model are (non-uniformly) in the
class NSPACE(m), where m is the number of edges of the communication
graph. Finally, we define Randomized MPP and show that any Peano
predicate accepted by a Randomized MPP can be verified in deterministic
polynomial time.
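The mediator described above can be pictured as a lookup table over ordered id pairs. The sketch below is a hypothetical data layout for illustration, not the paper's formal construction: agents submit their ids, and the mediator either returns the edge state (interaction permitted) or signals an abort.

```python
class Mediator:
    """A passive database mapping each permissible ordered pair of agent ids
    to the state of the corresponding edge (edge states come from a set of
    constant cardinality, independent of the population size)."""

    def __init__(self, permissible_edges):
        # permissible_edges: {(initiator_id, responder_id): edge_state}
        self.edges = dict(permissible_edges)

    def request(self, u, v):
        """Return the edge state for (u, v), or None to abort the interaction."""
        return self.edges.get((u, v))

    def update(self, u, v, new_state):
        """Pairwise interaction rules may rewrite the state of a permitted edge."""
        if (u, v) in self.edges:
            self.edges[(u, v)] = new_state
```

Note that the agents themselves never see or store the ids, which preserves the anonymity and uniformity properties the abstract emphasizes.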

Abstract: With a rapidly aging population, the health-care community will
soon face a severe medical personnel shortage. It is imperative that automated
health monitoring technologies be developed to help meet this
shortage. In this direction, we are developing Ayushman, a health monitoring
infrastructure and testbed. The vision behind its development
is two-fold: first, to develop a wireless sensor-based automated health
monitoring system that can be used in diverse situations, from home-based
care to disaster situations, without much customization; second,
to provide a testbed for implementing and testing communication protocols
and systems. Ayushman provides a collection of services which
enables it to perform this dual role. It possesses a hierarchical cluster
topology which provides a fault-tolerant and reliable system by ensuring
that each tier in the hierarchy is self-contained and can survive on its
own in case of network partition. Ayushman is being implemented using
off-the-shelf and diverse hardware and software components, which
presents many challenges in system integration and operational reliability.
This is an ongoing project at the IMPACT lab at Arizona State
University, and in this paper we present our system's architecture and
some of our experiences in the development of its initial prototype.

Abstract: This Volume contains the 11 papers corresponding to poster and demo presentations
accepted to the 7th ACM/IEEE International Symposium on Modeling,
Analysis and Simulation of Wireless and Mobile Systems (MSWiM 04),
that is held October 4-6, 2004, in Venice, Italy.
MSWiM 2004 (http://www.cs.unibo.it/mswim2004/) is intended to provide
an international forum for original ideas, recent results and achievements on
issues and challenges related to mobile and wireless systems.
A Call for Posters was announced and widely disseminated, soliciting posters
that report on recent original results or on-going research in the area of wireless
and mobile networks. Prospective authors were encouraged to submit interesting
results on all aspects of modeling, analysis and simulation of mobile and
wireless networks and systems. The scope and topics of the Posters Session
were the same as those included in the MSWiM Call for Papers (see above).
Poster presentations were meant to provide authors with early feedback on
their research work and enable them to present their research and exchange
ideas during the Symposium.
All submissions to the call for posters as well as selected papers submitted
to MSWiM 04 were considered and reviewed. The review process resulted in
accepting the set of 11 papers included in this Volume. Accepted posters will
also be on display during the Symposium.
The set of papers in this Proceedings covers a wide range of important topics
in wireless and mobile computing, including channel allocation in wireless
networks, quality of service provisioning in IEEE 802.11 wireless LANs, IP
mobility support, energy conservation, routing in mobile ad hoc networks, resource
sharing, wireless access to the WWW, sensor networks etc. The performance
evaluation techniques used include both analysis and simulation.
We hope that the poster papers included in this Volume will facilitate a fruitful
and lively discussion and exchange of interesting and creative ideas during
the Symposium.
We wish to thank the MSWiM Steering Committee Chair Azzedine Boukerche
and the Program Co-Chairs of MSWiM 04 Carla-Fabiana Chiasserini and
Lorenzo Donatiello for their valuable help in the selection procedure. We also
thank the MSWiM 04 Publicity Co-Chairs Luciano Bononi, Helen Karatza and Mirela
Sechi Moretti Annoni Notare for disseminating the Call for Posters.
We wish to warmly thank the Poster Proceedings Chair Ioannis Chatzigiannakis
for doing an excellent job in preparing the Volume you now
hold in your hands.

Abstract: Wireless networks have received significant attention during the recent years. Especially, ad hoc wireless networks emerged due to their potential applications in battlefield, emergency disaster relief, etc. [13]. Unlike traditional wired networks or cellular wireless networks, no wired backbone infrastructure is installed for ad hoc wireless networks.

Abstract: Wireless sensor networks are composed of a vast number of ultra-small, fully autonomous computing, communication, and sensing devices, with very restricted energy and computing capabilities, that cooperate to accomplish a large sensing task. Such networks can be very useful in practice. The authors propose extended versions of two data propagation protocols: the Sleep-Awake Probabilistic Forwarding (SW-PFR) protocol and the Hierarchical Threshold-Sensitive Energy-Efficient Network (H-TEEN) protocol. These nontrivial extensions aim at improving the performance of the original protocols by introducing sleep-awake periods in the PFR case to save energy and introducing a hierarchy of clustering in the TEEN case to better cope with large network areas. The authors implemented the two protocols and performed an extensive comparison via simulation of various important measures of their performance with a focus on energy consumption. Data propagation under this approach exhibits high fault tolerance and increases network lifetime.
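The sleep-awake extension can be pictured with a simple duty-cycle sketch. The slot lengths and scheduling rule below are illustrative assumptions, not the SW-PFR parameters: each node is awake for a fixed fraction of every period, and a packet waits (is buffered) whenever its next relay is asleep.

```python
def is_awake(node_offset, t, awake_slots=2, period=5):
    """A node is awake for `awake_slots` out of every `period` time slots."""
    return (t + node_offset) % period < awake_slots

def forward(packet_hops, offsets, t0=0, awake_slots=2, period=5):
    """Deliver a packet along a hop sequence, waiting out sleeping relays."""
    t = t0
    for hop in packet_hops:
        while not is_awake(offsets[hop], t, awake_slots, period):
            t += 1                      # the packet is buffered until the relay wakes
        t += 1                          # one slot to transmit over this hop
    return t                            # total delivery latency in slots
```

The trade-off the abstract alludes to is visible here: a smaller `awake_slots / period` ratio saves idle-listening energy at the cost of extra buffering delay.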

Abstract: In this work we focus on the energy efficiency challenge in wireless sensor networks, from both an on-line perspective (related to routing), as well as a network design perspective (related to tracking). We investigate a few representative, important aspects of energy efficiency: a) the robust and fast data propagation b) the problem of balancing the energy
dissipation among all sensors in the network and c) the problem of efficiently tracking moving
entities in sensor networks. Our work here is a methodological survey of selected results that
have already appeared in the related literature.
In particular, we investigate important issues of energy optimization, like minimizing the total
energy dissipation, minimizing the number of transmissions as well as balancing the energy
load to prolong the system's lifetime. We review characteristic protocols and techniques in the recent literature, including probabilistic forwarding and local optimization methods. We study the problem of localizing and tracking multiple moving targets from a network design perspective, i.e. towards estimating the least possible number of sensors, their positions and operation characteristics needed to efficiently perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem by a covering combinatorial problem and then design and analyze an efficient approximate method for sensor placement
and operation.

Abstract: The content-based publish/subscribe (pub/sub) paradigm for system design is becoming increasingly popular, offering
unique benefits for a large number of data-intensive applications. Coupled with the peer-to-peer technology, it can serve as
a central building block for developing data-dissemination applications deployed over a large-scale network infrastructure.
A key open problem towards the creation of large-scale content-based pub/sub infrastructures relates to efficiently and
accurately matching subscriptions with substring predicates to incoming events. This work addresses this issue.

Abstract: The content-based publish/subscribe (pub/sub) paradigm for system design is becoming increasingly popular, offering unique benefits for a large number of data-intensive applications. Coupled with the peer-to-peer technology, it can serve as a central building block for such applications deployed over a large-scale network infrastructure. A key problem toward the creation of large-scale content-based pub/sub infrastructures relates to dealing efficiently with continuous queries (subscriptions) with rich predicates on string attributes; in particular, efficiently and accurately matching substring queries to incoming events is an open problem. In this work we study this problem. We provide and analyze novel algorithms for processing subscriptions with substring predicates and events in a variety of environments. We provide experimental data demonstrating the relative performance behavior of the proposed algorithms, using as key metrics the network bandwidth requirements, number of messages, load balancing, as well as requirements for extra routing state (and related maintenance) and design flexibility.
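The core matching task can be stated as a naive centralized baseline, shown below for illustration only; the paper's contribution is performing this matching efficiently and accurately over a distributed, large-scale infrastructure rather than by linear scan.

```python
def matching_subscriptions(subscriptions, event_value):
    """Return the ids of all subscriptions whose substring predicate matches
    the event's attribute value.

    subscriptions: {sub_id: substring_predicate} (a hypothetical layout).
    """
    return sorted(sid for sid, s in subscriptions.items() if s in event_value)
```

Every distributed scheme must produce exactly this set of matches; the metrics listed in the abstract (bandwidth, message count, load balance, routing state) measure how cheaply that is achieved.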

Abstract: We employ here the Probabilistic Method, a way of reasoning that shows the existence of combinatorial structures and properties, to prove or refute conjectures. The radiocoloring problem (RCP) is the problem of assigning frequencies to the transmitters of a network so that transmitters at distance one get frequencies that differ by at least two and any two transmitters at distance two get frequencies that differ by at least one. The objective of an assignment may be to minimize the number of frequencies used (order) or the range of them (span). Here, we study the optimization version of RCP where the objective is to minimize the order. In graph theory terms the problem is modelled by a variation of the vertex graph coloring problem. We investigate upper bounds for the minimum number of colors needed in a radiocoloring assignment of a graph G. We first provide an upper bound for the minimum number of colors needed to radiocolor a graph G of girth at most 7. Then, we study whether the minimum order of a radiocoloring assignment is determined by local conditions, i.e. by the minimum order radiocoloring assignment of some small subgraphs of it. We state a related conjecture which is analogous to a theorem of Molloy and Reed for the vertex coloring problem [12]. We then investigate whether the conjecture contradicts a theorem of Molloy and Reed for vertex coloring when applied to the graph G^2.
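The radiocoloring condition itself is easy to state in code. The following check is a sketch of the definition on an explicit adjacency list, not of the paper's probabilistic arguments: vertices at distance one must receive frequencies differing by at least two, and vertices at distance two by at least one.

```python
def distances_le2(adj):
    """Yield (u, v, d) for all ordered pairs at graph distance d in {1, 2}."""
    for u in adj:
        for v in adj[u]:
            yield u, v, 1
        # distance-2 neighbors: reachable in two hops, but not adjacent and not u
        two_hops = {w for v in adj[u] for w in adj[v]} - set(adj[u]) - {u}
        for w in two_hops:
            yield u, w, 2

def is_radiocoloring(adj, freq):
    """Check the radiocoloring constraints for a frequency assignment `freq`."""
    for u, v, d in distances_le2(adj):
        gap = abs(freq[u] - freq[v])
        if d == 1 and gap < 2:
            return False
        if d == 2 and gap < 1:
            return False
    return True
```

The order of an assignment is then `len(set(freq.values()))` and its span is `max(freq.values()) - min(freq.values())`, the two objectives the abstracts distinguish.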

Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters, by exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ : V → N such that |Φ(u) − Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the distance of u, v in G is two. The number of discrete frequencies and the range of frequencies used are called order and span, respectively. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span or the order. In this paper we prove that the radiocoloring problem for general graphs is hard to approximate (unless NP = ZPP) within a factor of n^(1/2−ε) (for any ε > 0), where n is the number of vertices of the graph. However, when restricted to some special cases of graphs, the problem becomes easier. We prove that the min span RCP is NP-complete for planar graphs. Next, we provide an O(nΔ) time algorithm (|V| = n) which obtains a radiocoloring of a planar graph G that approximates the minimum order within a ratio which tends to 2 (where Δ is the maximum degree of G). Finally, we provide a fully polynomial randomized approximation scheme (fpras) for the number of valid radiocolorings of a planar graph G with λ colors, in the case where λ ≥ 4Δ + 50.

Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ : V → N such that |Φ(u) − Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the distance of u, v in G is two. The number of discrete frequencies used is called order and the range of frequencies used, span. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span (min span RCP) or the order (min order RCP).
In this paper, we deal with an interesting, yet not examined until now, variation of the radiocoloring problem: that of satisfying frequency assignment requests which exhibit some periodic behavior. In this case, the interference graph (modelling interference between transmitters) is some (infinite) periodic graph. Infinite periodic graphs usually model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they can model very large networks produced by the repetition of a small graph.
A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph Gi(Vi,Ei). The edge set of G is derived by connecting the vertices of each iteration Gi to some of the vertices of the next iteration Gi+1, the same for all Gi. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest.
We give two basic results:
• We prove that the min span RCP is PSPACE-complete for periodic planar graphs.
• We provide an O(n(Δ(Gi) + σ)) time algorithm (where |Vi| = n, Δ(Gi) is the maximum degree of the graph Gi and σ is the number of edges connecting each Gi to Gi+1), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(Gi) + σ tends to infinity.
We remark that any approximation algorithm for the min span RCP of a finite planar graph G, that achieves a span of at most αΔ(G) + constant, for any α and where Δ(G) is the maximum degree of G, can be used as a subroutine in our algorithm to produce an approximation for min span RCP of asymptotic ratio α for periodic planar graphs.

Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. The Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → N such that |Φ(u) − Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the distance of u, v in G is two. The range of frequencies used is called span. Here, we consider the optimization version of the Radiocoloring Problem (RCP) of finding a radiocoloring assignment of minimum span, called min span RCP. In this paper, we deal with a variation of RCP: that of satisfying frequency assignment requests with some periodic behavior. In this case, the interference graph is an (infinite) periodic graph. Infinite periodic graphs model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they may model very large networks produced by the repetition of a small graph. A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph Gi(Vi,Ei). The edge set of G is derived by connecting the vertices of each iteration Gi to some of the vertices of the next iteration Gi+1, the same for all Gi. The model of periodic graphs considered here is similar to that of periodic graphs in Orlin [13] and Marathe et al. [10]. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest. We give two basic results:
- We prove that the min span RCP is PSPACE-complete for periodic planar graphs.
- We provide an O(n(Δ(Gi) + σ)) time algorithm (where |Vi| = n, Δ(Gi) is the maximum degree of the graph Gi and σ is the number of edges connecting each Gi to Gi+1), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(Gi) + σ tends to infinity.

Abstract: In the last few years there has been a great amount of interest in Random Constraint Satisfaction
Problems, both from an experimental and a theoretical point of view. Quite intriguingly, experimental results
with various models for generating random CSP instances suggest that the probability of such problems having
a solution exhibits a "threshold-like" behavior. In this spirit, some preliminary theoretical work has been done
in analyzing these models asymptotically, i.e., as the number of variables grows. In this paper we prove that,
contrary to beliefs based on experimental evidence, the models commonly used for generating random CSP
instances do not have an asymptotic threshold. In particular, we prove that asymptotically almost all instances
they generate are overconstrained, suffering from trivial, local inconsistencies. To complement this result we
present an alternative, single-parameter model for generating random CSP instances and prove that, unlike
current models, it exhibits non-trivial asymptotic behavior. Moreover, for this new model we derive explicit
bounds for the narrow region within which the probability of having a solution changes dramatically.

Abstract: In this work we present an efficient algorithm which, with high probability, provides
an almost uniform sample from the set of proper $\chi$-colourings of an instance of sparse
random graphs $G_{n,d/n}$, where $\chi=\chi(d)$ is a sufficiently large constant.
This work improves, asymptotically, the result of Dyer, Flaxman, Frieze and Vigoda
in \cite{previous-result} where the algorithm proposed there needs at least
$\Theta(\frac{\log \log n}{\log \log \log n})$ colours.

Abstract: In this work we address the issue of efficient processing of range queries in DHT-based P2P data networks. The novelty of the proposed approach lies in the architectures, algorithms, and mechanisms for identifying and appropriately exploiting powerful nodes in such networks. The existence of such nodes has been well documented in the literature and plays a key role in the architecture of most successful real-world P2P applications. However, until now, this heterogeneity has not been taken into account when architecting solutions for complex query processing, especially in DHT networks. With this work we attempt to fill this gap by optimizing the processing of range queries. Significant performance improvements are achieved by (i) ensuring a much smaller hop count for range queries, and (ii) avoiding the dangers and inefficiencies of relying for range query processing on weak nodes, i.e., nodes with limited processing, storage, and communication capacities, and with intermittent connectivity. We present detailed experimental results validating our performance claims.

Abstract: In this paper we present a signaling protocol for
QoS differentiation suitable for optical burst switching networks.
The proposed protocol is a two-way reservation scheme that
employs delayed and in-advance reservation of resources. In this
scheme delayed reservations may be relaxed, introducing a
reservation duration parameter that is negotiated during the call
setup phase. This feature allows bursts to reserve resources
beyond their actual size to increase their successful forwarding
probability and is used to provide QoS differentiation. The
proposed signaling protocol offers a low blocking probability for
bursts that can tolerate the round-trip delay required for the
reservations. We present the main features of the protocol and
describe in detail timing considerations regarding the call setup
and the reservation process. We also describe several methods
for choosing the protocol parameters so as to optimize
performance and present corresponding evaluation results.
Furthermore, we compare the performance of the proposed
protocol against that of two other typical reservation protocols, a
Tell-and-Wait and a Tell-and-Go protocol.

Abstract: We consider the conflicting problems of ensuring data-access load balancing and efficiently processing range queries on peer-to-peer data networks maintained over Distributed Hash Tables (DHTs). Placing consecutive data values in neighboring peers is frequently used in DHTs since it accelerates range query processing. However, such a placement is highly susceptible to load imbalances, which are preferably handled by replicating data (since replication also introduces fault tolerance benefits). In this paper, we present HotRoD, a DHT-based architecture that deals effectively with this combined problem through the use of a novel locality-preserving hash function, and a tunable data replication mechanism which allows trading off replication costs for fair load distribution. Our detailed experimentation study shows strong gains in both range query processing efficiency and data-access load balancing, with low replication overhead. To our knowledge, this is the first work that concurrently addresses the two conflicting problems using data replication.
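The tension described above can be made concrete with a small sketch (this is a toy illustration, not HotRoD's actual hash function; the ring size and names are hypothetical): an order-preserving hash maps a value domain linearly onto the DHT identifier ring, so a range query becomes one contiguous arc of positions.

```python
RING_BITS = 16
RING_SIZE = 1 << RING_BITS

def op_hash(value, lo=0, hi=10_000):
    """Order-preserving hash: maps [lo, hi] linearly onto the ring,
    so consecutive values land on the same or neighboring positions."""
    return (value - lo) * (RING_SIZE - 1) // (hi - lo)

def range_arc(a, b):
    """A range query [a, b] maps to one contiguous arc of positions."""
    return op_hash(a), op_hash(b)

lo_pos, hi_pos = range_arc(2_000, 2_100)
print(lo_pos <= hi_pos)  # True: only peers owning [lo_pos, hi_pos] are visited
```

The same order preservation is what makes the placement skew-sensitive: a popular value range concentrates on a few peers, which is the load imbalance HotRoD counters by replicating the hot arcs.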

Abstract: The problem of robust line planning asks for a set of
origin-destination paths (lines) along with their traffic rates
(frequencies) in an underlying railway network infrastructure,
which are robust to fluctuations of real-time parameters of the
solution.
In this work, we investigate a variant of robust line planning
stemming from recent regulations in the railway sector that
introduce competition and free railway markets, and set up a new
application scenario: there is a (potentially large) number of
line operators that have their lines fixed and operate as
competing entities struggling to exploit the underlying network
infrastructure via frequency requests, while the management of
the infrastructure itself remains the responsibility of a single
(typically governmental) entity, the network operator.
The line operators are typically unwilling to reveal their true
incentives. Nevertheless, the network operator would like to
ensure a fair (or socially optimal) usage of the infrastructure,
e.g., by maximizing the (unknown to him) aggregate incentives of
the line operators. We show that this can be accomplished in
certain situations via a (possibly anonymous) incentive-compatible
pricing scheme for the usage of the shared resources, which is
robust against the unknown incentives and the changes in the
demands of the entities. This brings up a new notion of
robustness, which we call incentive-compatible robustness, that
considers as robustness of the system its tolerance to the
entities' unknown incentives and elasticity of demands, aiming at
an eventual stabilization to an equilibrium point that is as
close as possible to the social optimum.

Abstract: The ROTA project is aimed at offering high-speed network connections to ships traveling in archipelagos, i.e. seas with dense clusters of islands, such as the Aegean Sea. The ROTA system is based on the use of directional antennas on board the ships and stationary antennas on the islands. The stationary antennas on the islands are connected to high-speed commercial computer networks, which are available on practically all inhabited islands in the Aegean Sea, thus implementing a Virtual Private Network (VPN). This approach is much more cost-effective than current approaches that rely on the use of satellite communications. This paper presents the concept behind the ROTA platform.

Abstract: Orthogonal Frequency Division Multiplexing (OFDM)
has been recently proposed as a modulation technique for optical
networks, due to its good spectral efficiency and impairment
tolerance. Optical OFDM is much more flexible compared to
traditional WDM systems, enabling elastic bandwidth
transmissions. We consider the planning problem of an OFDM-based optical network where we are given a traffic matrix that
includes the requested transmission rates of the connections to be
served. Connections are provisioned for their requested rate by
elastically allocating spectrum using a variable number of OFDM
subcarriers. We introduce the Routing and Spectrum Allocation
(RSA) problem, as opposed to the typical Routing and
Wavelength Assignment (RWA) problem of traditional WDM
networks, and present various algorithms to solve the RSA. We
start by presenting an optimal ILP RSA algorithm that minimizes
the spectrum used to serve the traffic matrix, and also present a
decomposition method that breaks RSA into its two constituent
subproblems, namely, (i) routing and (ii) spectrum allocation
(R+SA) and solves them sequentially. We also propose a heuristic
algorithm that serves connections one-by-one and use it to solve
the planning problem by sequentially serving all traffic matrix
connections. To feed the sequential algorithm, two ordering
policies are proposed; a simulated annealing meta-heuristic is also
proposed to obtain even better orderings. Our results indicate
that the proposed sequential heuristic with appropriate ordering
yields close to optimal solutions in low running times.
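The spectrum-allocation subproblem described above can be sketched in miniature (this is a first-fit toy, not the paper's ILP or heuristic; link names and the subcarrier count are hypothetical): given a fixed route, find the lowest-indexed block of contiguous subcarriers that is free on every link of the route.

```python
def first_fit_spectrum(route_links, usage, demand, total_subcarriers=32):
    """Return the first (lowest-index) start of a block of `demand`
    contiguous subcarriers free on every link of the route, else None.
    `usage[link]` is the set of occupied subcarrier indices on that link."""
    for start in range(total_subcarriers - demand + 1):
        block = range(start, start + demand)
        if all(usage[link].isdisjoint(block) for link in route_links):
            for link in route_links:
                usage[link].update(block)   # reserve the block on the whole route
            return start
    return None

usage = {'AB': {0, 1, 2}, 'BC': {4, 5}}
print(first_fit_spectrum(['AB', 'BC'], usage, demand=3))  # -> 6
```

Serving connections one by one with such a routine is what makes the ordering of the traffic-matrix connections matter, which is why the abstract explores ordering policies and simulated annealing over orderings.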

Abstract: We consider information aggregation as a method for reducing the information exchanged in a Grid network and used by the resource manager in order to make scheduling decisions. In this way, information is summarized across nodes and sensitive or detailed information can be kept private, while resources are still publicly available for use. We present a general framework for information aggregation, trying to identify issues that relate to aggregation in Grids. In this context, we describe a number of techniques, including single point and intra-domain aggregation, define appropriate grid-specific domination relations and operators for aggregating static and dynamic resource information, and discuss resource selection optimization functions. The quality of an aggregation scheme is measured both by its effects on the efficiency of the scheduler's decisions and also by the reduction it brings on the amount of resource information recorded, a tradeoff that we examine in detail. Simulation experiments demonstrate that the proposed schemes achieve significant information reduction, either in the amount of information exchanged, or in the frequency of the updates, while at the same time maintaining most of the value of the original information as expressed by a stretch factor metric we introduce.
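Single-point aggregation of the kind discussed above can be sketched as follows (a minimal illustration, not the paper's operators; the record fields and helper name are hypothetical): per-node resource records are collapsed into one summary record, choosing operators so that a scheduler reading the summary never overestimates what a node can guarantee.

```python
def aggregate_domain(nodes):
    """Collapse per-node resource records into one domain summary.
    sum for additive capacity, max for the best single node,
    min for a worst-case (conservative) per-node guarantee."""
    return {
        'free_cpus_total': sum(n['free_cpus'] for n in nodes),
        'max_single_node_cpus': max(n['free_cpus'] for n in nodes),
        'min_free_mem_gb': min(n['free_mem_gb'] for n in nodes),
    }

nodes = [
    {'free_cpus': 8, 'free_mem_gb': 16},
    {'free_cpus': 2, 'free_mem_gb': 4},
]
print(aggregate_domain(nodes))
# -> {'free_cpus_total': 10, 'max_single_node_cpus': 8, 'min_free_mem_gb': 4}
```

One record replaces N, which is the information reduction; the price is lost precision in scheduling decisions, the tradeoff the abstract's stretch factor metric quantifies.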

Abstract: We present SeAl, a novel data/resource and data-access management infrastructure designed for the purpose of addressing a key problem in P2P data sharing networks, namely the problem of wide-scale selfish peer behavior. Selfish behavior has been manifested and well documented and it is widely accepted that unless this is dealt with, the scalability, efficiency, and the usefulness of P2P sharing networks will be diminished. SeAl essentially consists of a monitoring/accounting subsystem, an auditing/verification subsystem, and incentive mechanisms. The monitoring subsystem facilitates the classification of peers into selfish/altruistic. The auditing/verification layer provides a shield against perjurious/slandering and colluding peers that may try to cheat the monitoring subsystem. The incentive mechanisms effectively utilize these layers so as to increase the computational/networking and data resources that are available to the community. Our extensive performance results show that SeAl performs its tasks swiftly, while the overhead introduced by our accounting and auditing mechanisms in terms of response time, network, and storage overheads are very small.

Abstract: During the last years web search engines have moved from the simple but inefficient syntactical analysis (first generation) to the more robust and usable web graph analysis (second generation). Much of the current research is focused on the so-called third generation search engines that, in principle, inject human characteristics into how results are obtained and presented to the end user. Approaches exploited towards this direction include (among others): an alteration of PageRank [1] that takes into account user-specific characteristics and biases the page ordering using the user preferences (an approach, though, that does not scale well with the number of users). The approach is further exploited in [3], where several PageRanks are computed for a given number of distinct search topics. A similar idea is used in [6], where the PageRank computation takes into account the content of the pages and the query terms the surfer is looking for. In [4], a decomposition of PageRank into basic components is suggested that may be able to scale the different PageRank computations to a bigger number of topics or even distinct users. Another approach to web search is presented in [2], where a rich extension of the web, called the semantic web, and the application of searching over this new setting is described.

Abstract: In this PhD thesis, we use formal logic and threshold phenomena that asymptotically emerge with certainty in order to build new trust models and to evaluate existing ones. The departure point of our work is that dynamic, global computing systems are not amenable to a static viewpoint of the trust concept, no matter how this concept is formalized. We believe that trust should be a statistical, asymptotic concept to be studied in the limit as the system's components grow according to some growth rate. Thus, our main goal is to define trust as an emerging system property that "appears" or "disappears" when a set of properties hold, asymptotically with probability 0 or 1, respectively. Here we combine first and second order logic in order to analyze the trust measures of specific network models. Moreover, we can use formal logic to determine whether generic reliability trust models provide a method for deriving trust between peers/entities as the network's components grow. Our approach can be used in a wide range of applications, such as monitoring the behavior of peers, providing a measure of trust between them, and assessing the level of reliability of peers in a network. Wireless sensor networks are comprised of a vast number of ultra-small autonomous computing, communication and sensing devices, with restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Sensor networks can be very useful in practice. Such systems should at least guarantee the confidentiality and integrity of the information reported to the controlling authorities regarding the realization of environmental events. Therefore, key establishment is critical for the protection of wireless sensor networks and the prevention of adversaries from attacking the network. Finally, in this dissertation we also propose three distributed group key establishment protocols suitable for such energy constrained networks.
This dissertation is composed of two parts. Part I develops the theory of the first and second order logic of graphs: their definition, and the analysis of their properties that are expressible in the first order language of graphs. In Part II we introduce some new distributed group key establishment protocols suitable for sensor networks. Several key establishment schemes are derived and their performance is demonstrated.

Abstract: Embedded computing devices dominate our everyday activities, from cell phones to wireless sensors that collect and process data for various applications. Although desktop and high-end server security seems to be under control by the use of current security technology, securing the low-end embedded computing systems is a difficult long-term problem. This is mainly due to the fact that the embedded systems are constrained by their operational environment and the limited resources they are equipped with. Recent research activities focus on the deployment of lightweight cryptographic algorithms and security protocols that are well suited to the limited resources of low-end embedded systems. Elliptic Curve Cryptography (ECC) offers an interesting alternative to classical public key cryptography (e.g., RSA and ElGamal) for embedded systems, since it uses smaller key sizes for achieving the same security level, thus making ECC an attractive and efficient alternative for deployment in embedded systems. In this chapter, the processing requirements and architectures for secure network access, communication functions, storage, and high availability of embedded devices are discussed. In addition, ECC-based state-of-the-art lightweight cryptographic primitives for the deployment of security protocols in embedded systems that fulfill the requirements are presented.
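The group operation underlying ECC can be shown on a toy curve (a teaching sketch only: the curve y² = x³ + 2x + 3 mod 97 and the base point are chosen for the example; real deployments use standardized curves over fields hundreds of bits wide):

```python
# Toy elliptic-curve arithmetic over a small prime field.
P, A, B = 97, 2, 3     # field modulus and curve coefficients (illustrative)
INF = None             # point at infinity, the group identity

def add(p1, p2):
    """Group law on the curve: chord-and-tangent addition."""
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                                  # inverse points
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, pt):
    """Double-and-add; computing k*G from G is easy, inverting it is the
    hard problem ECC keys rely on."""
    acc = INF
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

G = (0, 10)                 # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)
print(scalar_mult(2, G))    # -> (65, 32)
```

The point is the key-size argument from the abstract: the security of the scheme grows with the difficulty of the discrete logarithm in this group, which is much harder per key bit than factoring-based problems.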

Abstract: The Greek School Network (GSN) is a closed educational network that offers advanced
telematic and networking services to all primary/secondary education schools and
educational administration offices in Greece. The primary objective of GSN is to provide a network infrastructure for the interconnection of school PC laboratories so that modern educational methods and pedagogical models can be used in the school community securely and effectively. GSN has scaled in size, reached maturity, and is currently delivering a wide range of network and telematic services to students and educators. Being the second largest
communications network nationwide, GSN is exposed to all kinds of security threats and, due to its educational character, to naive user behaviour. The current paper presents an evaluation of security management solutions for the enforcement of policies, practices, and user protection methodologies proven viable within the GSN environment, as indicated by statistics and metrics on the use of the related services. The paper reaches the conclusion that GSN security services constitute a sound framework that can successfully cover the needs of
the school community.

Abstract: Our position is that a key to research efforts on ensuring high
performance in very large scale sharing networks is the idea of
volunteering; recognizing that such networks are comprised of
largely heterogeneous nodes in terms of their capacity and
behaviour, and that, in many real-world manifestations, a few
nodes carry the bulk of the request service load. In this paper we
outline how we employ volunteering as the basic idea using
which we develop altruism-endowed self-organizing sharing
networks to help solve two open problems in large-scale peer-to-peer
networks: (i) to develop an overlay topology structure that
enjoys better performance than DHT-structured networks and,
specifically, to offer O(log log N) routing performance in a
network of N nodes, instead of O(log N), and (ii) to efficiently
process complex queries and range queries, in particular.

Abstract: In this work, we study protocols so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction, we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol. Moreover, we assume pairwise interactions between the processes that are scheduled by a fair adversary. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network. We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. The expected time to convergence of our protocols is analyzed under a uniform random scheduler. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks. We additionally show how to partition the population into k supernodes, each being a line of log k nodes, for the largest such k. This amount of local memory is sufficient for the supernodes to obtain unique names and exploit their names and their memory to realize nontrivial constructions.
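The model can be sketched with a tiny simulation of a star constructor (one plausible rule set in the spirit of the model above, not necessarily the paper's exact protocol; all names are illustrative): every process starts as a candidate center, interacting centers eliminate one another, center-peripheral connections are activated, and peripheral-peripheral connections are deactivated.

```python
import random

def simulate_star(n=8, steps=50_000, seed=1):
    """Pairwise interactions under a uniform random scheduler.
    States: 'c' (candidate center) / 'p' (peripheral).
    Rules: (c,c) -> (c,p), activate; (c,p) -> activate; (p,p) -> deactivate."""
    random.seed(seed)
    state = {v: 'c' for v in range(n)}
    active = set()                          # active connections as frozensets
    for _ in range(steps):
        u, v = random.sample(range(n), 2)   # uniform random interacting pair
        e = frozenset((u, v))
        if state[u] == 'c' and state[v] == 'c':
            state[v] = 'p'                  # one center survives the duel
            active.add(e)
        elif 'c' in (state[u], state[v]):
            active.add(e)
        else:
            active.discard(e)
    return state, active

state, active = simulate_star()
centers = [v for v, s in state.items() if s == 'c']
print(len(centers), len(active))  # expect 1 center and 7 active edges (a spanning star)
```

Once a single center remains, its edges to the peripherals stay active and every peripheral-peripheral edge is eventually switched off, so the configuration is stable in the paper's sense: no further interaction changes it.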

Abstract: In this work, we study protocols (i.e. distributed algorithms) so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol (i.e. the system is homogeneous). Moreover, we assume pairwise interactions between the processes that are scheduled by an adversary. The only constraint on the adversary scheduler is that it must be fair, intuitively meaning that it must assign to every reachable configuration of the system a non-zero probability to occur. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. In particular, in every interaction, the protocol may activate an inactive connection, deactivate an active one, or leave the state of a connection unchanged. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network (i.e. one that does not change any more). We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. We provide proofs of correctness for all of our protocols and analyze the expected time to convergence of most of them under a uniform random scheduler that selects the next pair of interacting processes uniformly at random from all such pairs. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks. 
Our universality protocols use a subset of the population (waste) in order to distributedly construct there a TM able to decide a graph class in some given space. Then, the protocols repeatedly construct in the rest of the population (useful space) a graph equiprobably drawn from all possible graphs. The TM works on this and accepts if the presented graph is in the class. We additionally show how to partition the population into k supernodes, each being a line of log k nodes, for the largest such k. This amount of local memory is sufficient for the supernodes to obtain unique names and exploit their names and their memory to realize nontrivial constructions. Delicate composition and reinitialization issues have to be solved for these general constructions to work.

Abstract: We present recent advances in multi-wavelength, power-equalized laser sources that incorporate a semiconductor optical amplifier (SOA) and simple optical filters, such as Lyot-type and Fabry-Perot, for comb generation. Both linear and ring-cavity configurations are presented, and a single-pass optical feedback technique is proposed to improve the performance in terms of the number of simultaneously oscillating lines and output channel power equalization. This technique resulted in a broadened oscillating spectrum of 52 lines spaced at 50 GHz, power-equalized within 0.3 dB. Finally, a simplified version that uses only an uncoated SOA for both gain and comb generation is demonstrated.

Abstract: In this paper we address the problem of capturing and pro-
cessing certain spatiotemporal, social characteristics of hu-
man interactions with the use of Wireless Sensor Networks.
Using TelosB motes, we basically monitor the binary prox-
imity within a group of people. The collected data give an
insight of how people interact with each other (how often,
for how much time, in which room) and provide a novel tool
(which can be further enhanced) to study (quantitatively, in
an automated manner) human social networks.

Abstract: Efficient task scheduling is fundamental for the success of the Grids,
since it directly affects the Quality of Service (QoS) offered to the users. Efficient
scheduling policies should be evaluated based not only on performance
metrics that are of interest to the infrastructure side, such as the Grid resources
utilization efficiency, but also on user satisfaction metrics, such as the percentage
of tasks served by the Grid without violating their QoS requirements. In this
paper, we propose a scheduling algorithm for tasks with strict timing requirements,
given in the form of a desired start and finish time. Our algorithm aims
at minimizing the violations of the time constraints, while at the same time
minimizing the number of processors used. The proposed scheduling method
exploits concepts derived from spectral clustering, and groups together for assignment
to a computing resource the tasks so to a) minimize the time overlapping
of the tasks assigned to a given processor and b) maximize the degree of
time overlapping among tasks assigned to different processors. Experimental
results show that our proposed strategy outperforms greedy scheduling algorithms
for different values of the task load submitted.
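The overlap criterion at the heart of the method can be illustrated with a deliberately simplified sketch (a greedy stand-in for the spectral-clustering step, not the paper's algorithm; task tuples and names are hypothetical):

```python
def overlap(a, b):
    """Length of the time overlap between tasks a=(start, finish) and b."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def assign(tasks, n_procs):
    """Place each task on the processor whose current tasks it overlaps
    least, so tasks sharing a processor rarely compete for time slots,
    while overlapping tasks end up on different processors."""
    groups = [[] for _ in range(n_procs)]
    for t in sorted(tasks):
        best = min(groups, key=lambda g: sum(overlap(t, u) for u in g))
        best.append(t)
    return groups

tasks = [(0, 4), (1, 5), (4, 8), (5, 9)]    # desired (start, finish) times
print(assign(tasks, 2))  # -> [[(0, 4), (4, 8)], [(1, 5), (5, 9)]]
```

In the result, tasks on the same processor never overlap in time, which is exactly objective (a) above, while the overlapping pairs are split across processors, objective (b).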

Abstract: In this paper, we analyze the stability properties of the FIFO protocol in the Adversarial Queueing model for packet routing. We show a graph for which FIFO is stable for any adversary with injection rate r ≤ 0.1428. We generalize this result to show an upper bound for the stability of any network under the FIFO protocol, partially answering an open question raised by Andrews et al. in [2]. We also design a network and an adversary for which FIFO is non-stable for any r ≥ 0.8357, improving the previously known bounds of [2].

Abstract: The existence of good probabilistic models
for the job arrival process and the delay components
introduced at different stages of job processing in a
Grid environment is important for the improved
understanding of the Grid computing concept. In this
study, we present a thorough analysis of the job
arrival process in the EGEE infrastructure and of the
time durations a job spends at different states in the
EGEE environment. We define four delay compo-
nents of the total job delay and model each compo-
nent separately. We observe that the job inter-arrival
times at the Grid level can be adequately modelled by
a rounded exponential distribution, while the total job
delay (from the time it is generated until the time it
completes execution) is dominated by the computing
element’s register and queuing times and the worker
node’s execution times. Further, we evaluate the
efficiency of the EGEE environment by comparing
the job total delay performance with that of a hypothetical ideal super-cluster and conclude that we
would obtain similar performance if we submitted the
same workload to a super-cluster of size equal to 34%
of the total average number of CPUs participating in
the EGEE infrastructure. We also analyze the job
inter-arrival times, the CE’s queuing times, the WN’s
execution times, and the data sizes exchanged at the
kallisto.hellasgrid.gr cluster, which is a node in the
EGEE infrastructure. In contrast to the Grid level, we
find that at the cluster level the job arrival process
exhibits self-similarity/long-range dependence. Final-
ly, we propose simple and intuitive models for the job
arrival process and the execution times at the cluster
level.
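
As a hypothetical illustration of the modelling step described above, the sketch below fits an exponential distribution to synthetic inter-arrival times by maximum likelihood (the MLE of the rate is the reciprocal of the sample mean); the data and parameters are invented for the example:

```python
import random

def fit_exponential(samples):
    """Maximum-likelihood fit of an exponential distribution:
    the MLE of the rate is 1 / (sample mean)."""
    return len(samples) / sum(samples)

# Hypothetical job inter-arrival times, drawn from a rate-0.5 exponential.
random.seed(1)
interarrivals = [random.expovariate(0.5) for _ in range(100_000)]
rate = fit_exponential(interarrivals)   # close to 0.5
# A "rounded" exponential, as in the abstract, would additionally round
# each sample to the measurement granularity before fitting.
```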

Abstract: We consider the important problem of energy balanced data propagation in wireless sensor networks and we extend and generalize
previous works by allowing adaptive energy assignment. We consider the data gathering problem where data are generated by the sensors and
must be routed toward a unique sink. Sensors route data by either sending the data directly to the sink or in a multi-hop fashion by delivering
the data to a neighbouring sensor. Direct and neighbouring transmissions require different levels of energy consumption. Basically, the protocols balance the energy consumption among the sensors by computing the adequate ratios of direct and neighbouring transmissions. An abstract model of energy dissipation as a random walk is proposed, along with rigorous performance analysis techniques. Two efficient distributed algorithms are presented and analysed, by both rigorous means and simulation.
The first one is easy to implement and fast to execute. The protocol assumes that sensors know a-priori the rate of data they generate.
The sink collects and processes all this information in order to compute the relevant value of the protocol parameter. This value is transmitted
to the sensors which individually compute their optimal ratios of direct and neighbouring transmissions. The second protocol avoids the necessary a-priori knowledge of the data rate generated by sensors by inferring the relevant information from the observation of the data paths.
Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes.
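
A toy instance may help make the ratio computation concrete. The sketch below assumes a two-sensor chain with a quadratic direct-transmission cost (an assumption made for illustration; the paper's model and protocols are more general) and solves for the mixing ratio p that equalizes the two sensors' energy consumption:

```python
def balanced_ratio(d=2.0):
    """Toy two-sensor chain: sensor 2 sits at distance d from the sink,
    sensor 1 at distance 1. Sensor 2 sends a fraction p of its messages
    directly to the sink (assumed cost d**2) and relays the rest through
    sensor 1 (cost 1 to send, cost 1 for sensor 1 to forward).
      E2 = p*d**2 + (1-p)*1     (sensor 2, per own message)
      E1 = 1 + (1-p)*1          (own message, plus relaying for sensor 2)
    Equating the two linear expressions in p:
      p*d**2 + 1 - p = 2 - p  =>  p = 1/d**2  for this toy model."""
    return 1.0 / d**2

p = balanced_ratio()     # 0.25 for d = 2
E2 = p * 4 + (1 - p)     # 1.75
E1 = 1 + (1 - p)         # 1.75 -- energy consumption is balanced
```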

Abstract: In emerging pervasive scenarios, data is collected by sensing devices in streams that occur at several distributed points of observation. The size of the data typically far exceeds the storage and computational capabilities of the tiny devices that have to collect and process them. A general and challenging task is to allow (some of) the nodes of a pervasive network to collectively perform monitoring of a neighbourhood of interest by issuing continuous aggregate queries on the streams observed in its vicinity. This class of algorithms is fully decentralized and diffusive in nature: collecting all the data at a few central nodes of the network is unfeasible in networks of low capability devices or in the presence of massive data sets. Two main problems arise in this scenario: (i) the intrinsic complexity of maintaining statistics over a data stream whose size greatly exceeds the capabilities of the device that performs the computation; (ii) composing the partial outcomes computed at different points of observation into an accurate, global statistic over a neighbourhood of interest, which entails coping with several problems, last but not least the receipt of duplicate information along multiple paths of diffusion.
Streaming techniques have emerged as powerful tools to achieve the general goals described above, in the first place because they adopt a computational model in which computational and storage resources are far exceeded by the amount of data on which computation occurs. In this contribution, we review the main streaming techniques and provide a classification of the computational problems and the applications they effectively address, with an emphasis on decentralized scenarios, which are of particular interest in pervasive networks.

Abstract: In this paper, we present and study solutions for the efficient processing of queries over string attributes in a large P2P data network implemented with DHTs. The proposed solutions support queries with equality, prefix, suffix, and containment predicates over string attributes. To the best of our knowledge, no prior solution to this problem exists.
We propose and study algorithms for processing such queries and their optimizations. As event-based, Publish/Subscribe information systems are a champion application class where string attribute (continuous) queries are very common, we pay particular attention to this type of data networks, formulating our solution in terms of this environment. A major design decision behind the proposed solution is our intention to provide a solution that is general (DHT-independent), capable of being implemented on top of any particular DHT.

Abstract: We study extreme Nash equilibria in the context of a selfish routing game. Specifically, we assume a collection of n users, each employing a mixed strategy, which is a probability distribution over m parallel identical links, to control the routing of its own assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic on those links that minimize its expected latency cost. The social cost of a Nash equilibrium is the expectation, over all random choices of the users, of the maximum, over all links, latency through a link.
We provide substantial evidence for the Fully Mixed Nash Equilibrium Conjecture, which states that the worst Nash equilibrium is the fully mixed Nash equilibrium, where each user chooses each link with positive probability. Specifically, we prove that the Fully Mixed Nash Equilibrium Conjecture is valid for pure Nash equilibria. Furthermore, we show that, under a certain condition, the social cost of any Nash equilibrium is within a factor of 2h(1+ɛ) of that of the fully mixed Nash equilibrium, where h is the factor by which the largest user traffic deviates from the average user traffic.
Considering pure Nash equilibria, we provide a PTAS to approximate the best social cost, we give an upper bound on the worst social cost and we show that it is NP-hard to approximate the worst social cost within a multiplicative factor better than 2 - 2/(m + 1).

Abstract: A key issue when designing and implementing large-scale publish/subscribe systems is how to efficiently propagate subscriptions among the brokers of the system. Brokers require this information in order to forward incoming events only to interested users, filtering out unrelated events, which can save significant overheads (particularly network bandwidth and processing time at the brokers). In this paper we contribute the notion of subscription summaries, a mechanism appropriately compacting subscription information. We develop the associated data structures and matching algorithms. The proposed mechanism can handle event/subscription schemata that are rich in terms of their attribute types and powerful in terms of the allowed operations on them. Our major results are that the proposed mechanism (i) is scalable, with the bandwidth required to propagate subscriptions increasing only slightly, even at huge-scales, and (ii) is significantly more efficient, up to orders of magnitude, depending on the scale, with respect to the bandwidth requirements for propagating subscriptions.
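
To illustrate the idea of compacting subscription information (using a standard Bloom filter as a stand-in, not the paper's summary structure), consider the sketch below: brokers exchange a small bit array instead of full subscription lists, at the price of occasional false positives but never false negatives:

```python
import hashlib

class BloomFilter:
    """Compact, lossy set summary: no false negatives, tunable false
    positives. Sketches how subscription information might be compacted
    before propagation (illustrative; not the paper's exact mechanism)."""
    def __init__(self, nbits=1024, nhashes=4):
        self.nbits, self.nhashes, self.bits = nbits, nhashes, 0
    def _positions(self, item):
        for i in range(self.nhashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits
    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos
    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

summary = BloomFilter()
for sub in ["stock/NASDAQ/*", "news/sports", "weather/Athens"]:
    summary.add(sub)
# A broker holding this summary forwards an event only if a matching
# subscription might be present:
assert summary.might_contain("news/sports")   # never a false negative
```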

Abstract: The content-based publish/subscribe (pub/sub)
paradigm for system design is becoming increasingly
popular, offering unique benefits for a large number of
data-intensive applications. Coupled with the peer-to-peer
technology, it can serve as a central building block for
such applications deployed over a large-scale network
infrastructure. A key problem toward the creation of
large-scale content-based pub/sub infrastructures relates to
dealing efficiently with continuous queries (subscriptions)
with rich predicates on string attributes; in this work we
study the problem of efficiently and accurately matching
substring queries to incoming events.

Abstract: In this work we consider temporal networks, i.e. networks defined by a labeling $\lambda$ assigning to each edge of an underlying graph G a set of discrete time-labels. The labels of an edge, which are natural numbers, indicate the discrete time moments at which the edge is available. We focus on path problems of temporal networks. In particular, we consider time-respecting paths, i.e. paths whose edges are assigned by $\lambda$ a strictly increasing sequence of labels. We begin by giving two efficient algorithms for computing shortest time-respecting paths on a temporal network. We then prove that there is a natural analogue of Menger’s theorem holding for arbitrary temporal networks. Finally, we propose two cost minimization parameters for temporal network design. One is the temporality of G, in which the goal is to minimize the maximum number of labels of an edge, and the other is the temporal cost of G, in which the goal is to minimize the total number of labels used. Optimization of these parameters is performed subject to some connectivity constraint. We prove several lower and upper bounds for the temporality and the temporal cost of some very basic graph families such as rings, directed acyclic graphs, and trees.
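
A minimal sketch of an earliest-arrival computation for time-respecting paths (illustrative code, not the paper's algorithm): since labels along a path must be strictly increasing, a single pass over the time-sorted edges suffices:

```python
def earliest_arrival(edges, source):
    """Earliest-arrival time-respecting paths in a temporal graph.
    edges: iterable of (u, v, t), meaning edge u->v is available at the
    discrete time-label t; a time-respecting path uses strictly
    increasing labels. One scan over the time-sorted edges suffices,
    because an edge used at time t can only extend paths that arrive
    strictly before t (so same-label edges cannot chain)."""
    INF = float("inf")
    arrival = {source: 0}  # labels are natural numbers >= 1
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        if arrival.get(u, INF) < t and t < arrival.get(v, INF):
            arrival[v] = t
    return arrival

edges = [("s", "a", 1), ("a", "b", 2), ("s", "c", 2), ("c", "b", 1)]
print(earliest_arrival(edges, "s"))
# {'s': 0, 'a': 1, 'b': 2, 'c': 2} -- b is reached via a; the edge
# c->b at time 1 is useless because c is only reached at time 2.
```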

Abstract: In this work, we consider a \emph{solution of automata} similar to \emph{Population Protocols} and \emph{Network Constructors}. The automata (also called \emph{nodes}) move passively in a well-mixed solution without being capable of controlling their movement. However, the nodes can \emph{cooperate} by interacting in pairs. Every such interaction may result in an update of the local states of the nodes. Additionally, the nodes may also choose to connect to each other in order to start forming some required structure. We may think of such nodes as the \emph{smallest possible programmable pieces of matter}, like tiny nanorobots or programmable molecules. The model that we introduce here is a more applied version of Network Constructors, imposing \emph{physical} (or \emph{geometrical}) \emph{constraints} on the connections that the nodes are allowed to form. Each node can connect to other nodes only via a very limited number of \emph{local ports}, which implies that at any given time it has only a \emph{bounded number of neighbors}. Connections are always made at \emph{unit distance} and are \emph{perpendicular to connections of neighboring ports}. Though such a model cannot form abstract networks like Network Constructors, it is still capable of forming very practical \emph{2D or 3D shapes}. We provide direct constructors for some basic shape construction problems, like \emph{spanning line}, \emph{spanning square}, and \emph{self-replication}. We then develop \emph{new techniques} for determining the computational and constructive capabilities of our model. One of the main novelties of our approach, concerns our attempt to overcome the inability of such systems to detect termination. In particular, we exploit the assumptions that the system is well-mixed and has a unique leader, in order to \emph{give terminating protocols that are correct with high probability}. 
This allows us to develop terminating subroutines that can be \emph{sequentially composed} to form larger \emph{modular protocols} (which has not been the case in the relevant literature). One of our main results is a \emph{terminating protocol counting the size $n$ of the system} with high probability. We then use this protocol as a subroutine in order to develop our \emph{universal constructors}, establishing that \emph{it is possible for the nodes to become self-organized with high probability into arbitrarily complex shapes while still detecting termination of the construction}.

Abstract: We extend the population protocol model with a cover-time service that informs a walking state every time it covers the whole network. This represents a known upper bound on the cover time of a random walk. The cover-time service allows us to introduce termination into population protocols, a capability that is crucial for any distributed system. By reduction to an oracle-model we arrive at a very satisfactory lower bound on the computational power of the model: we prove that it is at least as strong as a Turing Machine of space log n with input commutativity, where n is the number of nodes in the network. We also give a log n-space, but nondeterministic this time, upper bound. Finally, we prove interesting similarities of this model to linear bounded automata.

Abstract: We extend the population protocol model with a cover-time service that informs a walking state every time it covers the whole network. This is simply a known upper bound on the cover time of a random walk. This allows us to introduce termination into population protocols, a capability that is crucial for any distributed system. By reduction to an oracle-model we arrive at a very satisfactory lower bound on the computational power of the model: we prove that it is at least as strong as a Turing Machine of space logn with input commutativity, where n is the number of nodes in the network. We also give a logn-space, but nondeterministic this time, upper bound. Finally, we prove interesting similarities of this model to linear bounded automata.

Abstract: We explore the capability of a network of extremely limited
computational entities to decide properties about any of its subnetworks.
We consider that the underlying network of the interacting
entities (devices, agents, processes etc.) is modeled by a complete in-
teraction graph and we devise simple graph protocols that can decide
properties of some input subgraph provided by some preprocessing on
the network. The agents are modeled as finite-state automata and run
the same global graph protocol. Each protocol is a fixed size grammar,
that is, its description is independent of the size (number of agents) of
the network. This size is not known by the agents. We propose a simple
model, the Mediated Graph Protocol (MGP) model, similar to the Population
Protocol model of Angluin et al., in which each network link is
characterized by a state taken from a finite set. This state can be used
and updated during each interaction between the corresponding agents.
We provide some interesting properties of the MGP model among which
is the ability to decide properties on stabilizing (initially changing for a
finite number of steps) input graphs and we show that the MGP model
has the ability to decide properties of disconnected input graphs. We
show that the computational power within the connected components is
fairly restricted. Finally, we give an exact characterization of the class
GMGP, of graph languages decidable by the MGP model: it is equal
to the class of graph languages decidable by a nondeterministic Turing
Machine of linear space that receives its input graph by its adjacency
matrix representation.

Abstract: We explore the capability of a network of extremely limited computational entities to decide properties about itself or any of its subnetworks. We consider that the underlying network of the interacting entities (devices, agents, processes etc.) is modeled by an interaction graph that reflects the network’s connectivity. We examine the following two cases: First, we consider the case where the input graph is the whole interaction graph and second where it is some subgraph of the interaction graph given by some preprocessing on the network. In each case, we devise simple graph protocols that can decide properties of the input graph. The computational entities, that are called agents, are modeled as finite-state automata and run the same global graph protocol. Each protocol is a fixed size grammar, that is, its description is independent of the size (number of agents) of the network. This size is not known by the agents. We present two simple models (one for each case), the Graph Decision Mediated Population Protocol (GDMPP) and the Mediated Graph Protocol (MGP) models, similar to the Population Protocol model of Angluin et al., where each network link (edge of the interaction graph) is characterized by a state taken from a finite set. This state can be used and updated during each interaction between the corresponding agents. We provide some example protocols and some interesting properties for the two models concerning the computability of graph languages in various settings (disconnected input graphs, stabilizing input graphs). We show that the computational power within the family of all (at least) weakly-connected input graphs is fairly restricted. 
Finally, we give an exact characterization of the class of graph languages decidable by the MGP model in the case of complete interaction graphs: it is equal to the class of graph languages decidable by a nondeterministic Turing Machine of linear space that receives its input graph by its adjacency matrix representation.

Abstract: This work addresses the possibility or impossibility, and the corresponding costs, of devising
concurrent, low-contention implementations of atomic Read&Modify&Write (or RMW) operations
in a distributed system. A natural class of monotone RMW operations associated with monotone
groups, a certain class of algebraic groups introduced here, is considered. The popular Fetch&Add
and Fetch&Multiply operations are examples from the class.
A Monotone Linearizability Lemma is proved and employed as a chief combinatorial instrument in
this work; it establishes inherent ordering constraints of linearizability for a certain class of executions
of any distributed system implementing a monotone RMW operation.

Abstract: In this work we present the design of jWebDust, a
software environment for monitoring and controlling sensor networks via a web interface. Our software architecture provides a range of services that allow the creation of customized, easy-to-administer applications with minimum implementation effort. We present its open architecture, the most important design decisions, and discuss its distinct features and functionalities. jWebDust will allow heterogeneous components to operate in the same sensor network, and the integrated management and control of multiple such networks, by defining web-based mechanisms to visualize the network state, the results of queries, and a means to inject queries in the network.

Abstract: I. Caragiannis, C. Kaklamanis, P. Kanellopoulos, and M. Kyropoulou. In Proceedings of the 5th Workshop on Internet and Network Economics (WINE 09), LNCS, Springer, 2009, to appear.

Abstract: The Greek School Network (GSN) is a closed nationwide
educational network that offers advanced telematic and
networking services to all primary/secondary education schools
and administration offices in Greece. The primary objective of
GSN is the provisioning of a network infrastructure for the interconnection
of school PC laboratories so that modern educational
methods and pedagogical models can be applied to the school
community. GSN has scaled in size, has reached maturity, and
is currently delivering a wide range of network and telematic
services to its users. The emerging power of open-source software
provides a sound technological basis for building cutting-edge
services, capable of meeting internal administrative and monitoring
needs, and modern pedagogical requirements for tools and
services. The current paper presents an overview of GSN and an
evaluation of its services based on the opinions of its users, and on
service utilization and traffic measurement statistics. The paper
reaches the conclusion that open-source solutions provide a sound
technological platform that can cover, to a great extent, the needs
for advanced educational services of the school community.

Abstract: We propose and evaluate the performance of a new MAC-layer protocol for mobile ad hoc networks, called the Slow Start Power Controlled (abbreviated SSPC) protocol. SSPC improves on IEEE 802.11 by using power control for the RTS/CTS and DATA frame transmissions, so as to reduce energy consumption and increase network throughput and lifetime. In our scheme the transmission power used for the RTS frames is not constant, but follows a slow start principle. The CTS frames, which are sent at maximum transmission power, prevent the neighbouring nodes from transmitting their DATA frames at power levels higher than a computed threshold, while allowing them to transmit at power levels less than that threshold. Reduced energy consumption is achieved by adjusting the node transmission power to the minimum required value for reliable reception at the receiving node, while an increase in network throughput is achieved by allowing more transmissions to take place simultaneously. The slow start principle used for calculating the appropriate DATA frame transmission power and the possibility of more simultaneous collision-free transmissions differentiate the SSPC protocol from the other MAC solutions proposed for IEEE 802.11. Simulation results indicate that the SSPC protocol achieves a significant reduction in power consumption, average packet delay and frequency of RTS frame collisions, and a significant increase in network throughput and received-to-sent packets ratio compared to the IEEE 802.11 protocol.
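
The slow start principle for RTS power selection can be sketched as follows (an illustrative doubling schedule, by analogy with TCP slow start; the actual SSPC probing schedule and parameters may differ):

```python
def slow_start_rts(p_min, p_max, p_needed):
    """Sketch of slow-start RTS power probing: start at p_min and raise
    the power on each failed probe (assumed here to double each time)
    until the receiver can be reached, i.e. power >= p_needed. Returns
    the list of power levels probed, or None if even p_max is too low."""
    probes, power = [], p_min
    while power <= p_max:
        probes.append(power)
        if power >= p_needed:     # RTS heard, CTS comes back
            return probes
        power *= 2                # retry at a higher power level
    return None                   # receiver unreachable even at p_max

print(slow_start_rts(1, 32, 5))   # [1, 2, 4, 8]
```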

Abstract: In this paper we solve a particular instance of a class of differential equations that appears not to be
widely documented in the literature and results from the theoretical analysis of a probabilistic key agreement
protocol parameterized by an integer k > 1. This class consists of differential equations of the form
dy(t)/dt = Σ_{i=0}^{k+1} f_i(t) y(t)^i, with k > 1 and f_i(t), 0 ≤ i ≤ k+1, specific real functions of t. This form
appears to generalize the class of Abel differential equations of the first kind in that it involves the sought function raised to powers greater than
3, for k > 2. For k > 2 the class of differential equations is not amenable to the general solution methodology of the Abel class of differential equations. To the best of our knowledge, there are no previous efforts to tackle, analytically, differential equations of this form. In this paper we focus on the case k = 3. We show that the solution to the resulting differential equation may be written through the use of a class of Generalized Hyper-Lambert functions. These functions generalize the well known Lambert W function, which frequently arises in the analysis of engineering applications. The solution to the differential equation is achieved, interestingly, through a reduction to a particular real root of the polynomial of degree four that appears in the right-hand side of the differential equation. This technique can be of independent interest and can be, more generally, applicable to other differential equations of a similar form not amenable to known analytic techniques.
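
For reference, the Lambert W function mentioned above satisfies W(x)·e^{W(x)} = x and is easy to evaluate numerically; the sketch below computes the principal branch for x > 0 with Newton's method:

```python
import math

def lambert_w(x, w=1.0, tol=1e-12):
    """Principal branch of the Lambert W function (W(x) * e**W(x) = x),
    for x > 0, via Newton's method on f(w) = w*e**w - x."""
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w(1.0)                    # the "omega constant", ~0.567143
assert abs(w * math.exp(w) - 1.0) < 1e-9
```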

Abstract: We consider random systems of linear equations over GF(2) in which every equation binds k variables. We obtain a precise description of the clustering of solutions in such systems. In particular, we prove that with probability that tends to 1 as the number of variables, n, grows: for every pair of solutions \sigma, \tau, either there exists a sequence of solutions \sigma,...,\tau, in which successive elements differ by O(log n) variables, or every sequence of solutions \sigma,...,\tau, contains a step requiring the simultaneous change of \Omega(n) variables. Furthermore, we determine precisely which pairs of solutions are in each category. Our results are tight and highly quantitative in nature. Moreover, our proof highlights the role of unique extendability as the driving force behind the success of Low Density Parity Check codes and our techniques also apply to the problem of so-called pseudo-codewords in such codes.
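
A small worked example of such a system (illustrative only; the abstract's results concern random instances and their solution geometry): Gaussian elimination over GF(2), with equations stored as bitmasks, solves a k = 3 system in four variables:

```python
def solve_gf2(rows, rhs, nvars):
    """Gaussian elimination over GF(2). Each equation is a bitmask
    (bit j set <=> variable j appears) plus a 0/1 right-hand side.
    Returns one solution (free variables set to 0), or None if the
    system is inconsistent."""
    rows, rhs = list(rows), list(rhs)
    pivots, r = [], 0
    for col in range(nvars):
        piv = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rhs[r], rhs[piv] = rhs[piv], rhs[r]
        for i in range(len(rows)):        # eliminate col from all other rows
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]
                rhs[i] ^= rhs[r]
        pivots.append((col, r))
        r += 1
    if any(rhs[i] for i in range(r, len(rows))):  # a leftover "0 = 1" row
        return None
    x = [0] * nvars
    for col, i in pivots:
        x[col] = rhs[i]
    return x

# A k = 3 system: every equation binds three of the four variables.
rows = [0b0111, 0b1110, 0b1101, 0b1011]
rhs  = [1, 0, 1, 0]
print(solve_gf2(rows, rhs, 4))  # [0, 1, 0, 1]
```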

Abstract: We present an improved upper bound on the competitiveness of the online colouring algorithm First-Fit in disk graphs, which are graphs representing overlaps of disks on the plane. We also show that this bound is best possible for deterministic online colouring algorithms that do not use the disk representation of the input graph. Finally, we present a related new lower bound for unit disk graphs.
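
First-Fit itself is simple to state: each vertex, in arrival order, receives the smallest colour not used by its already-coloured neighbours. The sketch below (a hypothetical example) runs First-Fit on a tiny disk graph built from explicit disks:

```python
import math

def first_fit_colouring(disks):
    """First-Fit colouring of a disk graph. disks: list of (x, y, r);
    two disks are adjacent iff they overlap, i.e. the distance between
    centres is at most the sum of the radii. Each disk, in arrival
    order, gets the smallest colour unused by its already-coloured
    neighbours. First-Fit only sees the adjacencies, not the disk
    representation, matching the online setting of the abstract."""
    def adjacent(a, b):
        return math.dist(a[:2], b[:2]) <= a[2] + b[2]
    colours = []
    for i, d in enumerate(disks):
        taken = {colours[j] for j in range(i) if adjacent(disks[j], d)}
        c = 0
        while c in taken:
            c += 1
        colours.append(c)
    return colours

# Three unit disks in a row: the outer two overlap only the middle one.
print(first_fit_colouring([(0, 0, 1), (1.5, 0, 1), (3, 0, 1)]))  # [0, 1, 0]
```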

Abstract: Peer-to-peer sharing systems are becoming
increasingly popular and an exciting new class of
innovative, internet-based data management
systems. In these systems, users contribute their
own resources (processing units and storage
devices) and content (i.e., documents) to the P2P
community. We focus on the management of
content and resources in such systems. Our goal
is to harness all available resources in the P2P
network so that the users can access all available
content efficiently. Efficiency is taken both from
(i) the point of view of the system, in that we
strive to ensure fair load distribution among all
peer nodes, and (ii) from the point of view of the
users, in that we strive to ensure low user-request
response times.
We propose a novel architecture for this new
class of applications, which differs drastically
from what is either found currently in existing
products or proposed in academia. We contribute
and study novel solutions that achieve our goals,
while at the same time addressing the formidable
challenges due to the autonomy of peers, their
heterogeneous processing and storage capacities,
their different content contributions, the huge
system scale, and the highly dynamic system
environment.

Abstract: In this paper, we discuss the integration of Wireless
Sensor Networks (WSN) and smart objects with the Web. We present a set of research challenges which we believe are the most important ones arising from this integration and propose a prototype system, Uberdust, which addresses such challenges. Uberdust is a brokerage web service for connecting smart objects to the Internet of Things, providing storage, sharing and discovery of real-time and historical data from smart objects, devices & building installations around the world via the Web. Our system provides high-level language-independent APIs so IoT application developers may choose their favorite programming or scripting languages.

Abstract: In this work, we introduce the notion of time to some well-known combinatorial optimization problems. In particular, we study problems defined on temporal graphs. A temporal graph D=(V,A) may be viewed as a time-sequence G_1,G_2,...,G_l of static graphs over the same (static) set of nodes V. Each G_t = D(t) = (V,A(t)) is called the instance of D at time t and l is called the lifetime of D. Our main focus is on analogues of traveling salesman problems in temporal graphs. A sequence of time-labeled edges (e.g. a tour) is called temporal if its labels are strictly increasing. We begin by considering the problem of exploring the nodes of a temporal graph as soon as possible. In contrast to the positive results known for the static case, we prove that it cannot be approximated within cn, for some constant c > 0, in general temporal graphs and within (2 − \varepsilon), for every constant \varepsilon > 0, in the special case in which D(t) is connected for all 1 <= t <= l, both unless P = NP. We then study the temporal analogue of TSP(1,2), abbreviated TTSP(1,2), where, for all 1 <= t <= l, D(t) is a complete weighted graph with edge-costs from {1,2} and the cost of an edge may vary from instance to instance. The goal is to find a minimum cost temporal TSP tour. We give several polynomial-time approximation algorithms for TTSP(1,2). Our best approximation is (1.7 + \varepsilon) for the generic TTSP(1,2) and (13/8 + \varepsilon) for its interesting special case in which the lifetime of the temporal graph is restricted to n. Along the way, we also introduce temporal versions of Maximum Matching, Path Packing, Max-TSP, and Minimum Cycle Cover, for which we obtain polynomial-time approximation algorithms and hardness results.

Abstract: Today we are experiencing a major reconsideration of the computing
paradigm, as witnessed by the abundance and increasing frequency
of use of terms such as {\em ambient intelligence}, {\em ubiquitous computing}, {\em disappearing computer}, {\em grid
computer}, {\em global computing} and {\em mobile ad-hoc
networks}. Systems described by such terms are dynamic in nature,
with no clear physical boundary, and it seems impossible (or, at
least, difficult) to sharply define a number of important
properties that hold with certainty and that continue to hold
throughout the whole lifetime of the system.
One such system property, which is important for the viability of
a system, is {\em trust}. Our departure point is the assumption
that it seems very difficult to define static system properties
related to trust and expect that they hold eternally in the
rapidly changing systems falling under the new computing paradigm.
One should, rather, attempt to define trust in terms of properties
that hold with some limiting probability as the system grows,
and try to establish conditions that ensure that ``good''
properties hold {\em almost certainly}. Based on this viewpoint,
in this paper we provide a new framework for defining trust
through formally definable properties that hold, almost certainly,
in the limit in randomly growing combinatorial structures that
model ``shapeless'' computing systems (e.g. ad-hoc networks),
drawing on results that establish the threshold behavior of
predicates written in the first and second order logic.
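The threshold behavior invoked above can be made concrete with the classic example of connectivity in the random graph G(n, p), which has a sharp threshold at p = ln(n)/n: above it the property holds almost certainly, below it almost certainly not. A small simulation sketch (our own illustration under these assumptions, not the paper's framework):

```python
# Empirical look at the connectivity threshold of G(n, p).
# Above p = ln(n)/n samples are almost always connected; below, almost never.
import math
import random

def random_graph_connected(n, p, rng):
    """Sample G(n, p) and test connectivity via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

rng = random.Random(0)
n = 200
above = sum(random_graph_connected(n, 2.0 * math.log(n) / n, rng) for _ in range(20))
below = sum(random_graph_connected(n, 0.2 * math.log(n) / n, rng) for _ in range(20))
# Empirically, `above` is close to 20 trials connected and `below` close to 0.
```

In the paper's terms, "connected" is a first-order-expressible style of monotone property whose probability tends to 0 or 1 as the structure grows, which is what makes "trust holds almost certainly in the limit" a workable definition.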

Abstract: Digital optical logic circuits capable of performing bit-wise signal processing are critical building blocks for the realization of future high-speed packet-switched networks. In this paper, we present recent advances in all-optical processing circuits and examine the potential of their integration into a system environment. Following this concept, we demonstrate serial all-optical Boolean AND/XOR logic at 20 Gb/s and a novel all-optical packet clock recovery circuit, with low capturing time, suitable for burst-mode traffic. The circuits use the semiconductor-based ultrafast nonlinear interferometer (UNI) as the nonlinear switching element. We also present the integration of these circuits in a more complex unit that performs header and payload separation from short synchronous data packets at 10 Gb/s. Finally, we discuss a method to realize a novel packet scheduling switch architecture, which guarantees lossless communication under specific traffic burstiness constraints, using these logic units.

Abstract: Recent activity in the field of Internet-of-Things experimentation has focused on the federation of discrete testbeds, thus placing less effort on the integration of other related technologies, such as smartphones; also, while it is gradually moving to more application-oriented paths, such as urban settings, it has largely not dealt with applications having social networking features. We argue here that current IoT infrastructure, testbeds and related software technologies should be used in such a context, capturing real-world human mobility and social networking interactions, for use in evaluating and fine-tuning realistic mobility models and designing human-centric applications. We discuss a system for producing traces for a new generation of human-centric applications, utilizing technologies such as Bluetooth and focusing on human interactions. We describe the architecture for this system and the respective implementation details, presenting two distinct deployments: one in an office environment and another in an exhibition/conference event (FET'11, The European Future Technologies Conference and Exhibition), with 103 active participants combined, thus covering two popular scenarios for human-centric applications. Our system provides online, almost real-time feedback and statistics, and its implementation allows for rapid and robust deployment, utilizing mainstream technologies and components.

Abstract: The Internet of Things (IoT) and smart cities are two of the most popular directions the research community is pursuing very actively. Although we have made great progress in many fields, we are still trying to figure out how to utilize our smart city and IoT infrastructures in order to produce reliable, economically sustainable solutions that create public value, and even more so in the field of education.
GAIA, a Horizon 2020 EC-funded project, has developed an IoT infrastructure across school buildings in Europe. Its primary aim has been to raise awareness about energy consumption and sustainability, based on real-world sensor data produced inside the school buildings where students and teachers live and work. Today's students are the citizens of tomorrow, and they should have the skills to understand and respond to challenges like climate change. Currently, 25 educational building sites participate in GAIA, located in Sweden, Italy, and Greece. An IoT infrastructure [1] is installed in these buildings, monitoring in real time their power consumption, as well as several indoor and outdoor environmental parameters.

Abstract: In this work we present two mobile, locative, and collaborative distributed games that are played using wireless sensor devices. We briefly present the architecture of the two games and demonstrate their capabilities. The key characteristic of these games is that players interact with each other and their surrounding environment by moving, running, and gesturing as a means to perform game-related actions, using sensor devices. We demonstrate our system by implementing it using a combination of Java Standard and Mobile Editions.

Abstract: Experimentally driven research for wireless sensor networks is invaluable for providing benchmarking and comparison of new ideas. An increasingly common tool in support of this is a testbed composed of real hardware devices, which increases the realism of evaluation. However, due to hardware costs, the size and heterogeneity of these testbeds is usually limited. In addition, a testbed typically has a relatively static configuration in terms of its network topology and its software support infrastructure, which limits the utility of that testbed to specific case studies. We propose a novel approach that can be used to (i) interconnect a large number of small testbeds to provide a federated testbed of very large size, (ii) support the interconnection of heterogeneous hardware into a single testbed, and (iii) virtualise the physical testbed topology and thus minimise the need to relocate devices. We present the most important design issues of our approach and evaluate its performance. Our results indicate that testbed virtualisation can be achieved with high efficiency and without hindering the realism of experiments.

Abstract: We study computationally hard combinatorial problems arising from the important engineering question of how to maximize the number of connections that can be simultaneously served in a WDM optical network. In such networks, WDM technology can satisfy a set of connections by computing a route and assigning a wavelength to each connection so that no two connections routed through the same fiber are assigned the same wavelength. Each fiber supports a limited number w of wavelengths, and in order to fully exploit the parallelism provided by the technology, one should select a set of connections of maximum cardinality which can be satisfied using the available wavelengths. This is known as the maximum routing and path coloring problem (maxRPC).
Our main contribution is a general analysis method for a class of iterative algorithms for a more general coloring problem. A lower bound on the benefit of such an algorithm in terms of the optimal benefit and the number of available wavelengths is given by a benefit-revealing linear program. We apply this method to maxRPC in both undirected and bidirected rings to obtain bounds on the approximability of several algorithms. Our results also apply to the problem maxPC where paths instead of connections are given as part of the input. We also study the profit version of maxPC in rings where each path has a profit and the objective is to satisfy a set of paths of maximum total profit.
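To make the maxPC setting concrete, a simple first-fit heuristic on a line topology illustrates the constraint structure (this is our own illustrative sketch, not the paper's iterative algorithms or the benefit-revealing LP analysis; the path encoding is an assumption):

```python
# maxPC sketch on a line of links 0..num_links-1: a path (a, b) occupies
# links a..b-1; two paths sharing a link need different wavelengths, and
# only w wavelengths exist, so some paths may have to be rejected.

def first_fit_maxpc(paths, num_links, w):
    """Greedy first-fit: consider paths by increasing right endpoint,
    assign the lowest wavelength free on every link the path uses."""
    usage = [[False] * w for _ in range(num_links)]  # usage[link][wavelength]
    accepted = []
    for a, b in sorted(paths, key=lambda p: p[1]):
        links = range(a, b)
        for lam in range(w):
            if all(not usage[l][lam] for l in links):
                for l in links:
                    usage[l][lam] = True
                accepted.append(((a, b), lam))
                break  # path accepted; otherwise it is rejected
    return accepted

# Four paths, four links, w = 2 wavelengths: (0,3) cannot be fit.
acc = first_fit_maxpc([(0, 2), (1, 3), (0, 3), (2, 4)], num_links=4, w=2)
```

On rings and with bidirected edges the interaction between routing and coloring is what makes maxRPC substantially harder than this line case, which is where the paper's LP-based analysis of iterative algorithms applies.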

Abstract: We present the basic concepts behind the design and implementation of WebDust, a peer-to-peer platform for organizing, monitoring, and controlling wireless sensor networks, along with a discussion of its application to an actual testbed. Our software architecture provides a range of services that allow the creation of customized applications with relatively low implementation overhead. WebDust aims to allow heterogeneous components to operate in the same sensor network, and to give the ability to manage and control large numbers of such networks, possibly on a global scale. We also give insight into several applications that can be implemented using our platform, and a description of our current testbed.

Abstract: A Nash equilibrium of a routing network represents a stable state of the network where no user finds it beneficial to unilaterally deviate from its routing strategy. In this work, we investigate the structure of such equilibria within the context of a certain game that models selfish routing for a set of n users each shipping its traffic over a network consisting of m parallel links. In particular, we are interested in identifying the worst-case Nash equilibrium – the one that maximizes social cost. Worst-case Nash equilibria were first introduced and studied in the pioneering work of Koutsoupias and Papadimitriou [9].
More specifically, we continue the study of the Conjecture of the Fully Mixed Nash Equilibrium, henceforth abbreviated as the FMNE Conjecture, which asserts that the fully mixed Nash equilibrium, when it exists, is the worst-case Nash equilibrium. (In the fully mixed Nash equilibrium, the mixed strategy of each user assigns (strictly) positive probability to every link.) We report substantial progress towards identifying the validity of, methodologies to establish, and limitations of the FMNE Conjecture.
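For small instances of this parallel-links model, the social cost of a fully mixed profile can be computed by direct enumeration. A sketch (our own illustration of the model with identical links, where the fully mixed equilibrium assigns each user uniformly; not the paper's analysis):

```python
# n weighted users route on m identical parallel links; in the fully mixed
# profile on identical links each user picks a link uniformly at random.
# Social cost = expected maximum link load, here computed by enumerating
# all m^n pure assignments (feasible only for small n and m).
from itertools import product

def social_cost_fully_mixed(weights, m):
    n = len(weights)
    total, prob = 0.0, 1.0 / (m ** n)  # each assignment has probability m^{-n}
    for assignment in product(range(m), repeat=n):
        loads = [0.0] * m
        for w, link in zip(weights, assignment):
            loads[link] += w
        total += prob * max(loads)
    return total

# Two unit-weight users on two links: cost is (2 + 1 + 1 + 2) / 4 = 1.5,
# versus optimum 1.0, illustrating the gap the worst-case analysis bounds.
sc = social_cost_fully_mixed([1.0, 1.0], 2)
```

The FMNE Conjecture says that, when it exists, this fully mixed profile is the one maximizing the quantity computed above over all Nash equilibria of the instance.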