Abstract: We describe the design and implementation of a secure and robust architectural data management model suitable for cultural environments. Use of the World Wide Web is a critical requirement for a series of administrative tasks such as collecting, managing and distributing valuable cultural and artistic information. This requirement exposes cultural organizations to a great number of Internet threats that may cause huge data and/or financial losses and harm their reputation, their public acceptance and people's trust in them. Our model addresses a list of fundamental operational and security requirements. It utilizes a number of cryptographic primitives and techniques that provide data safety and secure user interaction in especially demanding online collaboration environments. We provide a reference implementation of our architectural model and discuss the technical issues. It is designed as a standalone solution but can be flexibly adapted to broader management infrastructures.
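As an illustration of the kind of cryptographic primitive such a model can employ, the sketch below attaches an HMAC-SHA256 tag to a stored record so that tampering is detectable. This is a minimal example of one integrity mechanism, not the paper's actual construction; the function names and the choice of HMAC-SHA256 are assumptions for illustration.

```python
import hmac
import hashlib


def seal_record(key: bytes, record: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a stored record.

    Anyone without the key cannot forge a valid tag, so undetected
    tampering with the record becomes infeasible.
    """
    return hmac.new(key, record, hashlib.sha256).digest()


def verify_record(key: bytes, record: bytes, tag: bytes) -> bool:
    """Check a record against its tag using a constant-time comparison."""
    return hmac.compare_digest(seal_record(key, record), tag)
```

In practice such a tag would be stored alongside each data object and re-verified on every retrieval.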
Abstract: Use of the Internet is a critical requirement for managing and distributing valuable digital assets. This requirement exposes commercial and non-commercial organizations to a great number of threats that may cause huge data and financial losses and harm their reputation and people's trust in them. In this paper we present the research challenges for secure digital asset management over the web by proposing a model that provides data safety and secure user interaction in especially demanding online collaboration environments.
Abstract: Designing wireless sensor networks is inherently complex; many aspects such as energy efficiency, limited resources, decentralized collaboration and fault tolerance have to be tackled. To be effective and to produce applicable results, fundamental research has to be tested, at least as a proof of concept, in large-scale environments, so as to assess the feasibility of the new concepts, verify their large-scale effects (not only at the technological level, but also their foreseeable implications for users, society and the economy) and derive further requirements, orientations and inputs for the research. In this paper we focus on the problems of interconnecting existing testbed environments via the Internet and providing a virtual unifying laboratory that will support academia, research centers and industry in their research on networks and services. In such a facility, important issues of trust, security, confidentiality and integrity of data may arise, especially for commercial and non-commercial organizations. We investigate these issues and present the design of a secure and robust architectural model for interconnecting testbeds of wireless sensor networks.
Abstract: We argue the case for a new paradigm for architecting structured P2P overlay networks, coined AESOP. AESOP consists of three layers: (i) an architecture, PLANES, that ensures significant performance speedups, assuming knowledge of altruistic peers; (ii) an accounting/auditing layer, AltSeAl, that identifies and validates altruistic peers; and (iii) SeAledPLANES, a layer that facilitates the coordination/collaboration of the previous two components. We briefly present these components along with experimental and analytical data on the promised significant performance gains and the related overhead. In light of these very encouraging results, we put forth this three-layer architectural paradigm as the way to structure the P2P overlay networks of the future.
Abstract: Wireless sensor networks consist of a large number of small, autonomous devices that are able to interact with their environment by sensing and that collaborate to fulfill their tasks since, usually, a single node is incapable of doing so; they use wireless communication to enable this collaboration. Each device has limited computational and energy resources, so a basic issue in applications of wireless sensor networks is low energy consumption and, hence, the maximization of the network lifetime.
The collected data is disseminated to a static control point (the data sink) in the network using node-to-node, multi-hop data propagation. However, this approach makes sensor devices consume significant amounts of energy and adds implementation complexity, since a routing protocol must be executed. Also, a point of failure emerges in the area near the control center, where nodes relay the data arriving from nodes that are farther away. Recently, a new approach has been developed that shifts the burden from the sensor nodes to the sink. The main idea is that the sink has significant and easily replenishable energy reserves and can move inside the area where the sensor network is deployed, in order to acquire the data collected by the sensor nodes at very low energy cost. However, the need to visit all regions of the network may result in large delivery delays.
In this work we have developed protocols that control the movement of the sink in wireless sensor networks with non-uniform deployment of the sensor nodes, in order to achieve efficient (with respect to both energy and latency) data collection. More specifically, a graph-formation phase is executed by the sink during initialization: the network area is partitioned into equal square regions, at each of which the sink pauses for a certain amount of time during the network traversal in order to collect data.
We propose two network traversal methods, a deterministic one and a random one. When the sink moves randomly, the next area to visit is selected in a biased random manner depending on the frequency of visits to its neighbor areas; thus, less frequently visited areas are favored. Moreover, our method locally determines the stop time needed to serve each region with respect to some global network resources, such as the initial energy reserves of the nodes and the density of the region, stopping for a greater time interval at regions with higher density and, hence, more traffic load. In this way, we achieve accelerated coverage of the network as well as fairness in the service time of each region. Besides randomized mobility, we also propose an optimized deterministic trajectory without visit overlaps, using direct (one-hop) sensor-to-sink data transmissions only.
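The biased random traversal described above can be sketched as follows. The selection weights (inverse visit counts) and the linear density-based pause time are illustrative assumptions; the abstract does not specify the exact bias function or stop-time formula.

```python
import random


def choose_next_region(current, neighbors, visit_counts):
    """Pick the next region to visit, biased toward less-visited neighbors.

    Each neighboring region is weighted inversely to how often it has
    already been visited, so rarely served regions are favored.
    """
    candidates = neighbors[current]
    weights = [1.0 / (1 + visit_counts[r]) for r in candidates]
    choice = random.choices(candidates, weights=weights, k=1)[0]
    visit_counts[choice] += 1
    return choice


def pause_time(region_density, base_pause=1.0):
    """Stop longer in denser regions, which carry more traffic load.

    A simple linear scaling, for illustration only.
    """
    return base_pause * region_density
```

Over many traversal steps, the inverse-count weighting drives the sink toward uniform coverage of all regions.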
We evaluate our methods via simulation, in diverse network settings and in comparison to related state-of-the-art solutions. Our findings demonstrate significant improvements in latency and energy consumption compared to previous research.
Abstract: For a place that gathers millions of people, the Web seems pretty lonely at times. This is mainly due to the currently predominant browsing scenario: that of an individual participating in an autonomous surfing session. We believe that people should be seen as an integral part of the browsing and searching activity, following a concept known as social navigation. In this work, we extend the typical web browser's functionality so as to raise awareness of other people having similar web surfing goals at the current moment. We further present features and algorithms that facilitate online communication and collaboration towards common searching targets. The utility of our system is established by experimental studies. The extensions we present can be easily adopted in a typical web browser.
Abstract: For a place that gathers millions of people, the Web seems pretty lonely at times. This is mainly due to the currently predominant browsing scenario: that of an individual participating in an autonomous surfing session. We believe that people should be seen as an integral part of the browsing and searching activity, following a concept known as social navigation. Based on this observation we present iClone (www.iclone.com), a social web browser that raises awareness of other people surfing similar websites at the same time, by utilizing temporal correlations of their web history logs, and that facilitates online communication and collaboration.
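One simple way to correlate two users' recent browsing, in the spirit of the history-log comparison mentioned above, is a Jaccard overlap over a sliding window of recently visited URLs. This is an illustrative sketch only; iClone's actual temporal correlation measure is not specified in the abstract, and the function name and window size are assumptions.

```python
def cosurf_score(history_a, history_b, window=20):
    """Similarity of two users' recent browsing sessions.

    Takes the Jaccard overlap (|intersection| / |union|) of the last
    `window` URLs in each history log; 1.0 means identical recent
    browsing, 0.0 means no overlap.
    """
    a = set(history_a[-window:])
    b = set(history_b[-window:])
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

A browser extension could compute this score pairwise against other online users and surface those above a threshold as potential co-surfers.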
Abstract: This paper describes recent research activities and results in the area of photonic switching carried out within the Virtual Department on Switching (VDS) of the European e-Photon/ONe Network of Excellence. Contributions from outstanding European research groups in this field are collected to offer a platform for future research in optical switching. The paper covers the main topics related to network scenarios, switch architectures and experiments, with an effort to investigate synergies and challenging opportunities for collaboration and integration of research expertise in the field.
Abstract: Recent rapid technological progress has led to the development of tiny, low-power, low-cost sensors. Such devices integrate sensing, limited data processing and communication capabilities. The effective distributed collaboration of large numbers of such devices can lead to the efficient accomplishment of large sensing tasks.
This talk focuses on several aspects of energy efficiency. Two protocols for data propagation are studied: the first creates probabilistically optimized redundant data transmissions to combine energy efficiency with fault tolerance, while the second guarantees (in a probabilistic way) the same per-sensor energy dissipation, towards balancing the energy load and prolonging the lifetime of the network. A third protocol (in fact a power-saving scheme) is also presented that directly and adaptively affects power dissipation at each sensor. This "lower-level" scheme can be combined with data propagation protocols to further improve energy efficiency.
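The core idea of probabilistically redundant propagation can be sketched as below: each node independently forwards a copy of the data to each sink-ward neighbor with some probability p, trading extra transmission energy for fault tolerance. This is a generic illustration, not the specific protocol studied in the talk; the function name and parameters are assumptions.

```python
import random


def probabilistic_forward(neighbors_towards_sink, p=0.6, rng=random):
    """Select which sink-ward neighbors receive a copy of the data.

    Each neighbor is chosen independently with probability p, so the
    chance that at least one copy survives node or link failures grows
    with p and with the number of neighbors, at proportional energy cost.
    """
    return [n for n in neighbors_towards_sink if rng.random() < p]
```

Tuning p lets the protocol trade energy consumption against delivery reliability.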
Abstract: Numerous research efforts have produced a large number of algorithms and mechanisms for web proxy caches. In order to build powerful web proxies and understand their performance, one must be able to appreciate the impact and significance of earlier contributions and how they can be integrated. To do this we employ a cache replacement algorithm, 'CSP', which integrates key knowledge from previous work. CSP utilizes the communication Cost to fetch web objects, the objects' Sizes, their Popularities, an auxiliary cache and a cache admission control algorithm. We study the impact of these components with respect to hit ratio, latency, and bandwidth requirements. Our results show that there are clear performance gains when utilizing the communication cost, the popularity of objects, and the auxiliary cache. In contrast, the size of objects and the admission controller have a negligible performance impact. Our major conclusions, going against those in related work, are that (i) LRU is preferable to CSP for important parameter values, (ii) accounting for the objects' sizes does not improve latency and/or bandwidth requirements, and (iii) the collaboration of nearby proxies is not very beneficial. Based on these results, we chart the problem solution space, identifying which algorithm is preferable and under which conditions. Finally, we develop a dynamic replacement algorithm that continuously utilizes the best algorithm as the problem-parameter values (e.g., the access distributions) change with time.
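A replacement policy combining cost, size and popularity can be sketched as a single utility score per cached object, in the style of GreedyDual-Size-Frequency heuristics: evict the object whose cost-weighted popularity per byte is lowest. The exact CSP weighting is not given in the abstract, so the formula below is an illustrative assumption.

```python
def evict_candidate(cache):
    """Pick which cached object to evict.

    `cache` maps a URL to a (fetch_cost, size, popularity) tuple. The
    object with the lowest cost * popularity / size value is the least
    valuable to keep: cheap to re-fetch, large, and rarely requested.
    """
    return min(
        cache,
        key=lambda url: cache[url][0] * cache[url][2] / cache[url][1],
    )
```

Dropping the cost or popularity factor from the score recovers simpler size-based policies, which is one way to study each component's impact in isolation.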