Abstract: In this paper we present a protocol for Certified E-Mail that ensures temporal authentication. We first slightly modify a previously known three-message optimistic protocol to obtain a building block that satisfies certain properties. We then extend this basic protocol with temporal authentication by adding a single message, improving on the message complexity of known protocols. The fairness of the protocol is ensured by an off-line trusted third party that joins the protocol only when one of the players misbehaves. To guarantee temporal authentication, we assume the existence of an on-line time-stamping server.
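As a minimal illustration of the on-line time-stamping step this protocol assumes, the sketch below binds a message digest to the current time. An HMAC with a demo key stands in for the time-stamping server's digital signature, and all names are illustrative assumptions rather than details taken from the protocol.

```python
import hashlib
import hmac
import time

TSS_KEY = b"demo-only-signing-key"  # stand-in for the TSS's private signing key

def timestamp(message: bytes) -> dict:
    """Bind a message digest to the current time, as an on-line
    time-stamping server (TSS) would; the client reveals only the
    digest, not the message itself."""
    digest = hashlib.sha256(message).hexdigest()
    now = int(time.time())
    # A real TSS would digitally sign (digest, time); an HMAC stands in here.
    tag = hmac.new(TSS_KEY, f"{digest}|{now}".encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "time": now, "token": tag}

print(timestamp(b"certified e-mail body"))
```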
Abstract: Mixed media workload scenarios occur in modern media server applications, since most multimedia applications typically require access to both continuous and discrete data. A high performance Continuous Media (CM) server greatly depends on the ability of its disk I/O subsystem to serve both types of workloads efficiently. Disk scheduling algorithms for mixed media workloads, although they play a central role in this task, have been overlooked by related research efforts. These algorithms must satisfy several stringent performance goals, such as achieving low response time and ensuring fairness for the discrete-data workload, while at the same time guaranteeing the uninterrupted delivery of continuous data for the continuous-data workload. The focus of this paper is on disk scheduling algorithms for mixed media workloads in a multimedia information server. We propose novel algorithms, present a taxonomy of relevant algorithms, and study their performance through experimentation. Our results show that our algorithms offer drastic improvements in discrete request average response times, are fair, serve continuous requests without interruptions, and that the disk technology trends are such that the expected performance benefits can be even greater in the future.
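Since the abstract does not detail the proposed algorithms, the following sketch shows only a generic round-based scheme of the kind such mixed-media schedulers build on: continuous-media requests are served first within each round, and the leftover slack is filled with discrete requests in a single SCAN sweep. The round structure and all names are illustrative assumptions, not the paper's algorithms.

```python
from dataclasses import dataclass

@dataclass
class Request:
    cylinder: int        # disk position of the request
    service_time: float  # estimated seek + rotation + transfer time

def schedule_round(cm_requests, dr_queue, round_len, head_pos):
    """One service round: continuous-media (CM) requests are served first,
    so stream deadlines (the round boundaries) are met; the remaining
    slack is filled with discrete requests swept in SCAN order."""
    slack = round_len - sum(r.service_time for r in cm_requests)
    sweep = sorted((r for r in dr_queue if r.cylinder >= head_pos),
                   key=lambda r: r.cylinder)
    batch = []
    for req in sweep:
        if req.service_time > slack:
            break
        batch.append(req)
        slack -= req.service_time
    return cm_requests + batch  # requests served this round; the rest wait
```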
Abstract: Wireless Sensor Networks consist of a large number of small, autonomous devices that are able to interact with their environment by sensing, and that collaborate to fulfill their tasks, since, usually, a single node is incapable of doing so; they use wireless communication to enable this collaboration. Each device has limited computational and energy resources; thus a basic issue in the applications of wireless sensor networks is low energy consumption and, hence, the maximization of network lifetime.
The collected data is disseminated to a static control point (the data sink) using node-to-node, multi-hop data propagation. However, sensor devices then consume significant amounts of energy, in addition to incurring increased implementation complexity, since a routing protocol must be executed. Also, a point of failure emerges in the area near the control center, where nodes relay the data arriving from nodes that are farther away. Recently, a new approach has been developed that shifts the burden from the sensor nodes to the sink. The main idea is that the sink has significant and easily replenishable energy reserves and can move inside the area where the sensor network is deployed, in order to acquire the data collected by the sensor nodes at very low energy cost. However, the need to visit all regions of the network may result in large delivery delays.
In this work we develop protocols that control the movement of the sink in wireless sensor networks with non-uniform deployment of the sensor nodes, in order to achieve efficient (with respect to both energy and latency) data collection. More specifically, a graph formation phase is executed by the sink during initialization: the network area is partitioned into equal square regions, at each of which the sink pauses for a certain amount of time during the network traversal in order to collect data.
We propose two network traversal methods, a deterministic and a random one. When the sink moves randomly, the next area to visit is selected in a biased random manner depending on the frequency of visits to its neighboring areas; thus, less frequently visited areas are favored. Moreover, our method locally determines the stop time needed to serve each region with respect to some global network resources, such as the initial energy reserves of the nodes and the density of the region, stopping for a longer interval at regions with higher density, and hence more traffic load. In this way, we achieve accelerated coverage of the network as well as fairness in the service time of each region (these two rules are sketched below). Besides randomized mobility, we also propose an optimized deterministic trajectory without visit overlaps, involving only direct (one-hop) sensor-to-sink data transmissions.
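A minimal sketch of the two rules just described; the abstract fixes only their direction (favor rarely visited neighbors, pause longer in denser regions), so the exact weighting and proportionality used here are illustrative assumptions.

```python
import random

def next_region(current, neighbors, visits):
    """Biased random traversal: choose the next region with probability
    inversely related to how often each neighboring region has already
    been visited, so rarely visited regions are favored."""
    options = neighbors[current]
    weights = [1.0 / (1 + visits[r]) for r in options]
    return random.choices(options, weights=weights, k=1)[0]

def pause_time(region_density, total_density, tour_budget):
    """Stop longer at denser regions, which generate more traffic;
    the linear rule here is one simple choice."""
    return tour_budget * region_density / total_density
```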
We evaluate our methods via simulation, in diverse network settings and in comparison to related state-of-the-art solutions. Our findings demonstrate significant latency and energy consumption improvements over previous research.
Abstract: Counting items in a distributed system, and estimating the cardinality of multisets in particular, is important for a large variety of applications and a fundamental building block for emerging Internet-scale information systems. Examples of such applications range from optimizing query access plans in peer-to-peer data sharing, to computing the significance (rank/score) of data items in distributed information retrieval. The general formal problem addressed in this article is computing the network-wide distinct number of items with some property (e.g., distinct files with file name containing “spiderman”) where each node in the network holds an arbitrary subset, possibly overlapping the subsets of other nodes. The key requirements that a viable approach must satisfy are: (1) scalability towards very large network size, (2) efficiency regarding messaging overhead, (3) load balance of storage and access, (4) accuracy of the cardinality estimation, and (5) simplicity and easy integration in applications. This article contributes the DHS (Distributed Hash Sketches) method for this problem setting: a distributed, scalable, efficient, and accurate multiset cardinality estimator. DHS is based on hash sketches for probabilistic counting, but distributes the bits of each counter across network nodes in a judicious manner based on principles of Distributed Hash Tables, paying careful attention to fast access and aggregation as well as update costs. The article discusses various design choices, exhibiting tunable trade-offs between estimation accuracy, hop-count efficiency, and load distribution fairness. We further contribute a full-fledged, publicly available, open-source implementation of all our methods, and a comprehensive experimental evaluation for various settings.
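For concreteness, here is a compact, centralized sketch of the hash-sketch (PCSA-style probabilistic counting) estimator that DHS builds on. DHS's actual contribution is distributing the bits of these bitmaps across DHT nodes, which this sketch does not attempt.

```python
import hashlib

def rho(x: int) -> int:
    """1-based position of the least significant 1-bit (Flajolet-Martin)."""
    return (x & -x).bit_length()

def lowest_zero(bitmap: int) -> int:
    """0-based index of the lowest 0-bit in a bitmap."""
    i = 0
    while bitmap & 1:
        bitmap >>= 1
        i += 1
    return i

def estimate_cardinality(items, m=64):
    """Hash-sketch estimate of the number of distinct items: each item
    sets one bit in one of m bitmaps; the average lowest-zero index R
    yields the estimate (m / phi) * 2**R, with phi ~= 0.77351."""
    bitmaps = [0] * m
    for it in items:
        h = int.from_bytes(hashlib.sha256(str(it).encode()).digest(), "big")
        bucket = h % m                              # which of the m bitmaps
        bitmaps[bucket] |= 1 << (rho(h // m or 1) - 1)
    R = sum(lowest_zero(b) for b in bitmaps) / m
    return (m / 0.77351) * 2 ** R

print(estimate_cardinality(range(100000)))  # close to 100000; duplicates ignored
```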
Abstract: We address the issue of measuring distribution fairness in Internet-scale networks. This problem has several interesting instances encountered in different applications, ranging from assessing the distribution of load between network nodes for load-balancing purposes, to measuring node utilization for optimal resource exploitation, and to guiding autonomous decisions of nodes in networks built with market-based economic principles. Although some metrics have been proposed, particularly for assessing load-balancing algorithms, they fall short. We first study the appropriateness of various known and previously proposed statistical metrics for measuring distribution fairness. We put forward a number of required characteristics for appropriate metrics. We propose and comparatively study the appropriateness of the Gini coefficient (G) for this task. Our study reveals as most appropriate the metrics of G, the fairness index (FI), and the coefficient of variation (CV), in this order. Second, we develop six distributed sampling algorithms to estimate these metrics online efficiently, accurately, and scalably. One of these algorithms (2-PRWS) is based on two effective optimizations of a basic algorithm, and two others (the sequential sampling algorithm, LBS-HL, and the clustered sampling one, EBSS) are novel, developed especially to estimate G. Third, we show how these metrics, and especially G, can be readily utilized online by higher-level algorithms, which can now know when to best intervene to correct unfair distributions (in particular, load imbalances). We conclude with a comprehensive experimentation which comparatively evaluates both the various proposed estimation algorithms and the three most appropriate metrics (G, CV, and FI). Specifically, the evaluation quantifies the efficiency (in terms of number of messages and a latency indicator), precision, and accuracy achieved by the proposed algorithms when estimating the competing fairness metrics. The central conclusion is that the proposed metric, G, can be estimated with a small number of messages and low latency, regardless of the skew of the underlying distribution.
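The three metrics the study singles out are simple to state; the following sketch computes all of them over a vector of per-node loads (the load vector is toy data, purely illustrative):

```python
def gini(xs):
    """Gini coefficient G: mean absolute pairwise difference divided by
    twice the mean; 0 means a perfectly even distribution."""
    n = len(xs)
    mu = sum(xs) / n
    return sum(abs(a - b) for a in xs for b in xs) / (2 * n * n * mu)

def fairness_index(xs):
    """Jain's fairness index FI = (sum x)^2 / (n * sum x^2); 1 means fair."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

def cv(xs):
    """Coefficient of variation CV: standard deviation over the mean."""
    mu = sum(xs) / len(xs)
    return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5 / mu

loads = [10, 12, 9, 40, 11]  # toy per-node load vector
print(gini(loads), fairness_index(loads), cv(loads))
```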
Abstract: Two important performance parameters of distributed, rate-based flow control algorithms are their locality and convergence complexity. The former is characterized by the amount of global knowledge that is available to their scheduling mechanisms, while the latter is defined as the number of update operations performed on rates of individual sessions until max-min fairness is reached. Optimistic algorithms allow any session to intermediately receive a rate larger than its max-min fair rate; bottleneck algorithms finalize the rate of a session only if it is restricted by a certain, highly congested link of the network. In this work, we present a comprehensive collection of lower and upper bounds on convergence complexity, under varying degrees of locality, for optimistic, bottleneck, rate-based flow control algorithms. Say that an algorithm is oblivious if its scheduling mechanism uses no information of either the session rates or the network topology. We present a novel, combinatorial construction of a capacitated network, which we use to establish a fundamental lower bound of dn/4 + n/2 on the convergence complexity of any oblivious algorithm, where n is the number of sessions laid out on a network, and d, the session dependency, is a measure of topological dependencies among sessions. Moreover, we devise a novel simulation proof to establish that, perhaps surprisingly, the lower bound of dn/4 + n/2 on convergence complexity still holds for any partially oblivious algorithm, in which the scheduling mechanism is allowed to use information about session rates, but is otherwise unaware of network topology. On the positive side, we prove that the lower bounds for oblivious and partially oblivious algorithms are both tight. We do so by presenting optimal oblivious algorithms, which converge after dn/2 + n/2 update operations are performed in the worst case. To complete the picture, we show that linear convergence complexity can indeed be achieved if information about both session rates and network topology is available to schedulers. We present a nonoblivious algorithm which converges within an optimal number of n update operations. Our results imply a surprising convergence complexity collapse of oblivious and partially oblivious algorithms, and a convergence complexity separation between (partially) oblivious and nonoblivious algorithms for optimistic, bottleneck, rate-based flow control.
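As background for the bottleneck algorithms discussed above, the sketch below implements the classic progressive-filling computation of max-min fair rates (the well-known textbook procedure, not the paper's schedulers): all unfrozen rates rise uniformly, and each time a link saturates it becomes a bottleneck, freezing the sessions that cross it.

```python
def max_min_rates(capacity, routes):
    """Progressive filling: capacity maps link -> capacity, routes maps
    session -> list of links it crosses. Returns max-min fair rates."""
    rates = {s: 0.0 for s in routes}
    active = set(routes)
    cap = dict(capacity)  # residual capacity per link
    while active:
        load = {l: sum(1 for s in active if l in routes[s]) for l in cap}
        # smallest uniform rate increment that saturates some link
        inc = min(cap[l] / load[l] for l in cap if load[l] > 0)
        for s in active:
            rates[s] += inc
        for l in cap:
            cap[l] -= inc * load[l]
        bottlenecks = {l for l in cap if load[l] > 0 and cap[l] < 1e-12}
        active = {s for s in active if not bottlenecks & set(routes[s])}
    return rates

# Session A crosses both links; l1 is its bottleneck.
print(max_min_rates({"l1": 1.0, "l2": 2.0},
                    {"A": ["l1", "l2"], "B": ["l1"], "C": ["l2"]}))
# -> {'A': 0.5, 'B': 0.5, 'C': 1.5}
```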
Abstract: We describe the design and implementation of a secure and robust protocol and system for a national electronic lottery. Electronic lotteries at a national level are a viable, cost-effective alternative to mechanical ones when there is a business need to support many types of "games of chance" and to allow increased drawing frequency. Electronic lotteries are, in fact, extremely high-risk financial applications: if one discovers a way to predict or otherwise claim the winning numbers (even once), the result is huge financial damage. Moreover, the e-lottery process is complex, which increases the possibility of fraud or costly accidental failures. In addition, a national lottery must adhere to auditability and (regulatory) fairness requirements regarding its drawings. Our mechanism, which we believe is the first of its kind to be described in the literature, builds upon a number of cryptographic primitives that ensure the unpredictability of the winning numbers, the prevention of their premature leakage, and the prevention of fraud. We also provide measures for auditability, fairness, and trustworthiness of the process. Besides cryptography, we incorporate security mechanisms that eliminate various risks along the entire process. Our system, which was commissioned by a national organization, was implemented in the field and has been operational and active for some time now.
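The abstract does not name its primitives, but a typical ingredient for preventing premature leakage while keeping a draw auditable is a commit-reveal step of the following flavor. This is a hedged sketch of that standard technique, not the deployed design.

```python
import hashlib
import secrets

def commit_to_seed():
    """Before the draw, publish H(seed || nonce): the seed that drives the
    number generation stays hidden, yet cannot be changed afterwards."""
    seed = secrets.token_bytes(32)
    nonce = secrets.token_bytes(16)
    commitment = hashlib.sha256(seed + nonce).hexdigest()
    return seed, nonce, commitment

def verify(seed, nonce, commitment):
    """After the draw, anyone can check that the revealed seed matches
    the pre-published commitment (auditability)."""
    return hashlib.sha256(seed + nonce).hexdigest() == commitment

seed, nonce, c = commit_to_seed()   # c is published before ticket sales close
assert verify(seed, nonce, c)       # checked by auditors after the draw
```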
Abstract: We study the problem of fair resource allocation in a simple cooperative multi-agent setting where we have k agents and a set of n objects to be allocated to those agents. Each object is associated with a weight represented by a positive integer or real number. We would like to allocate all objects to the agents so that each object is allocated to only one agent and the weight is distributed fairly. We adopt the fairness index popularized by the networking community as our measure of fairness, and study centralized algorithms for fair resource allocation. Based on the relationship between our problem and number partitioning, we devise a greedy algorithm for fair resource allocation that runs in polynomial time but is not guaranteed to find the optimal solution, and a complete anytime algorithm that finds the optimal solution but runs in exponential time. Then we study the phase transition behavior of the complete algorithm. Finally, we demonstrate that the greedy algorithm actually performs very well and returns almost perfectly fair allocations.
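A minimal sketch of such a greedy algorithm (the longest-processing-time heuristic familiar from number partitioning), together with the fairness index; the small example also shows why the greedy result is not guaranteed optimal.

```python
import heapq

def greedy_allocate(weights, k):
    """Assign each object, heaviest first, to the currently least-loaded
    agent; polynomial time, but not guaranteed to be optimal."""
    heap = [(0.0, agent, []) for agent in range(k)]
    heapq.heapify(heap)
    for w in sorted(weights, reverse=True):
        load, agent, items = heapq.heappop(heap)
        items.append(w)
        heapq.heappush(heap, (load + w, agent, items))
    return sorted(heap, key=lambda t: t[1])  # (load, agent, items) per agent

def fairness_index(loads):
    """Jain's fairness index of the agents' total weights; 1 = perfectly fair."""
    return sum(loads) ** 2 / (len(loads) * sum(x * x for x in loads))

alloc = greedy_allocate([8, 7, 6, 5, 4], k=2)          # greedy loads: 17 and 13
print(fairness_index([load for load, _, _ in alloc]))  # < 1; optimal 15/15 gives 1
```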
Abstract: We address the issue of measuring storage or query load distribution fairness in peer-to-peer data management systems. Existing metrics may look promising from the point of view of specific peers, while in reality being far from optimal from a global perspective. Thus, we first define the requirements and study the appropriateness of various statistical metrics for measuring load distribution fairness against these requirements. The metric proposed as most appropriate is the Gini coefficient (G). Second, we develop novel distributed sampling algorithms to compute G on-line, with high precision, efficiently, and scalably. Third, we show how G can readily be utilized on-line by higher-level algorithms, which can now know when to best intervene to correct load imbalances. Our analysis and experiments testify to the efficiency and accuracy of these algorithms, permitting the online use of a rich and reliable metric that conveys a global perspective of the distribution.
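To convey the flavor of on-line estimation (the paper's own sampling algorithms are more refined), here is a sketch that estimates G from a uniform random sample of peer loads, using the sorted-values identity for the Gini coefficient. The `probe` callback is a hypothetical stand-in for sampling one random peer, e.g., via a random walk.

```python
import random

def sampled_gini(probe, m=200):
    """Estimate the Gini coefficient from m sampled loads instead of a
    full network sweep; probe() returns the load of one random peer."""
    xs = sorted(probe() for _ in range(m))
    mu = sum(xs) / m
    # Gini via sorted values: sum_i (2i - m - 1) * x_(i) / (m^2 * mu)
    return sum((2 * (i + 1) - m - 1) * x for i, x in enumerate(xs)) / (m * m * mu)

print(sampled_gini(lambda: random.paretovariate(2)))  # skewed toy loads
```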
Abstract: Flow control is the main technique currently used to prevent some of the offered traffic from entering a communication network, and to avoid congestion. A challenging aspect of flow control is how to treat all sessions "fairly" when it is necessary to turn traffic away from the network. In this work, we show how to extend the theory of max-min fair flow control to the case where priorities are assigned to different varieties of traffic, which are sensitive to traffic levels. We examine priorities expressible in the general form of increasing functions of rates, also considering, in combination, the more elaborate case with inescapable upper and lower bounds on the rates of traffic sessions. We offer optimal, priority bottleneck algorithms, which iteratively adjust the session rates in order to meet a new condition of max-min fairness under priorities and rate bounds. In our setting, which is realistic for today's technology of guaranteed quality of service, traffic may be turned away not only to avoid congestion, but also to respect particular minimum requirements on bandwidth. Moreover, we establish lower bounds on the competitiveness of network-oblivious schemes compared to optimal schemes with complete knowledge of network structure. Our theory extends significantly the classical theory of max-min fair flow control [2]. Moreover, our results on rejected traffic are fundamentally different from those related to call control and bandwidth allocation, since not only do we wish to optimize the number and rates of accepted sessions, but we also require priority fairness.
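The simplest instance of this extended notion, a single link where each session has hard minimum and maximum rate bounds (with priorities reduced to the identity function for brevity), can be solved by finding a water level, as in the sketch below; this is a simplification for illustration, not the paper's multi-link, general-priority algorithms.

```python
def bounded_max_min(capacity, bounds):
    """Single-link max-min fairness with per-session rate bounds: find the
    water level L with sum_i clamp(L, lo_i, hi_i) == capacity; each
    session then receives its clamped level."""
    def used(level):
        return sum(min(max(level, lo), hi) for lo, hi in bounds)
    top = max(hi for _, hi in bounds)
    assert used(0) <= capacity <= used(top), "bounds incompatible with capacity"
    lo_l, hi_l = 0.0, top
    for _ in range(100):  # bisection on the water level
        mid = (lo_l + hi_l) / 2
        if used(mid) > capacity:
            hi_l = mid
        else:
            lo_l = mid
    return [min(max(lo_l, lo), hi) for lo, hi in bounds]

# Session 1 is capped at rate 2; sessions 2 and 3 share the rest equally.
print(bounded_max_min(10, [(1, 2), (0, 8), (2, 8)]))  # -> [2.0, 4.0, 4.0]
```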
Abstract: In this work we present three new distributed, probabilistic data propagation protocols for Wireless Sensor Networks, which aim at maximizing the network's operational life and improving its performance. The keystone of these protocols' design is fairness, which dictates that a fair portion of the network's workload should be assigned to each node, depending on its role in the system. All three protocols, EFPFR, MPFR and TWIST, emerged from the study of the rigorously analyzed protocol PFR: its design elements were identified, and improvements were suggested and incorporated into the introduced protocols. The experiments conducted show that our proposals improve PFR's performance in terms of success rate, total amount of energy saved, number of alive sensors, and standard deviation of the energy left. Indicatively, we note that while PFR's success rate is 69.5%, TWIST achieves 97.5%, and its standard deviation of remaining energy is almost half that of PFR.
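For context, PFR's central rule, which the three protocols refine, forwards a packet with probability proportional to the angle a node forms with the source and the sink, so that nodes near the source-sink line almost surely participate. The sketch below uses p = phi / pi; treat that exact constant as our recollection of PFR rather than a statement from this abstract.

```python
import math
import random

def forward_probability(node, source, sink):
    """Angle-based probabilistic forwarding in the spirit of PFR: phi is
    the angle source-node-sink; a node lying on the source-sink line has
    phi = pi and forwards with probability 1."""
    ax, ay = source[0] - node[0], source[1] - node[1]
    bx, by = sink[0] - node[0], sink[1] - node[1]
    cos_phi = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    phi = math.acos(max(-1.0, min(1.0, cos_phi)))
    return phi / math.pi

def forwards(node, source, sink):
    """Bernoulli trial deciding whether this node relays the packet."""
    return random.random() < forward_probability(node, source, sink)

print(forward_probability((5, 0.5), (0, 0), (10, 0)))  # near the line -> ~0.94
```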
Abstract: Grids offer a transparent interface to geographically scattered computation, communication, storage and other resources. In this chapter we propose and evaluate QoS-aware and fair scheduling algorithms for Grid Networks, which are capable of optimally or near-optimally assigning tasks to resources, while taking into consideration the task characteristics and QoS requirements. We categorize Grid tasks according to whether or not they demand hard performance guarantees. Tasks with one or more hard requirements are referred to as Guaranteed Service (GS) tasks, while tasks with no hard requirements are referred to as Best Effort (BE) tasks. For GS tasks, we propose scheduling algorithms that provide deadline or computational power guarantees, or offer fair degradation in the QoS such tasks receive in case of congestion. Regarding BE tasks, our objective is to allocate resources in a fair way, where fairness is interpreted in the max-min fair share sense. Though we mainly address scheduling problems on computation resources, we also look at the joint scheduling of communication and computation resources and propose routing and scheduling algorithms aiming at co-allocating both resource types so as to satisfy their respective QoS requirements.
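As a minimal illustration of the deadline-guarantee side (the chapter's algorithms are more elaborate), the sketch below admits a GS task on a resource only if the work already queued there plus the task's own work can finish by the deadline, and then picks the earliest-finishing feasible resource. All names and the resource model are illustrative assumptions.

```python
def finish_time(resource, task_work, now):
    """Completion time of the new task if appended to this resource's queue."""
    return now + (resource["queued_work"] + task_work) / resource["speed"]

def assign_gs_task(task_work, deadline, now, resources):
    """Admission control plus placement for a Guaranteed Service task:
    keep only resources that meet the deadline, then take the one that
    finishes earliest; return None if no resource can give the guarantee."""
    feasible = [(finish_time(r, task_work, now), r["name"]) for r in resources
                if finish_time(r, task_work, now) <= deadline]
    return min(feasible) if feasible else None

resources = [{"name": "cpuA", "speed": 2.0, "queued_work": 6.0},
             {"name": "cpuB", "speed": 1.0, "queued_work": 1.0}]
print(assign_gs_task(task_work=4.0, deadline=6.0, now=0.0, resources=resources))
```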
Abstract: Future Grid Networks should be able to provide Quality of Service (QoS) guarantees to their users. In this work we examine the way Grid resources should be configured so as to provide deterministic delay guarantees to Guaranteed Service (GS) users and fairness to Best Effort (BE) users. The resources are partitioned in groups that serve GS users only, or BE users only, or both types of users with different priorities. Furthermore, the GS users are registered to the resources either statically or dynamically, while both single- and multi-CPU resources are examined. Finally, the proposed resource configurations for providing QoS are implemented in the GridSim environment and a number of simulations are executed. Our results indicate that the allocation of resources to both types of users, with different priorities, results in fewer missed deadlines and better resource utilization. Finally, benefits can be derived from the dynamic registration of GS users to the resources.
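A toy sketch of the shared-resource configuration described above, in which GS and BE users are served on the same resource with different priorities (strict priority here; the simulated configurations are richer):

```python
from collections import deque

def serve_shared_resource(budget, gs_queue, be_queue):
    """Spend a resource's time budget on GS tasks first and only then on
    BE tasks, so BE traffic consumes leftover capacity; each queue holds
    (task_name, work) pairs. A deliberately simplified illustration."""
    finished = []
    for queue in (gs_queue, be_queue):   # GS strictly before BE
        while queue and queue[0][1] <= budget:
            name, work = queue.popleft()
            budget -= work
            finished.append(name)
    return finished, budget              # completed tasks and leftover slack

gs = deque([("gs1", 3.0), ("gs2", 2.0)])
be = deque([("be1", 4.0), ("be2", 1.0)])
print(serve_shared_resource(10.0, gs, be))  # -> (['gs1','gs2','be1','be2'], 0.0)
```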
Abstract: We study the impact of fairness on the efficiency of allocations. We consider three different notions of fairness, namely proportionality, envy-freeness, and equitability, for allocations of divisible and indivisible goods and chores. We present a series of results on the price of fairness under the three different notions that quantify the efficiency loss in fair allocations compared to optimal ones. Most of our bounds are either exact or tight within constant factors. Our study is of an optimistic nature and aims to identify the potential of fairness in allocations.
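For a concrete instance of this price of fairness, the brute-force sketch below computes the ratio of the optimal total value to the best total value over envy-free allocations, for a tiny indivisible-goods instance. It omits the utility normalization used in the formal model and is meant only to make the definition tangible.

```python
from itertools import product

def price_of_envy_freeness(values):
    """values[i][j] = agent i's value for good j. Enumerate all
    allocations of goods to agents; return optimal welfare divided by
    the best welfare achievable by an envy-free allocation."""
    n, m = len(values), len(values[0])
    def welfare(alloc):  # alloc[j] = owner of good j
        return sum(values[owner][j] for j, owner in enumerate(alloc))
    def envy_free(alloc):
        bundle_value = lambda i, k: sum(values[i][j]
                                        for j, owner in enumerate(alloc)
                                        if owner == k)
        return all(bundle_value(i, i) >= bundle_value(i, k)
                   for i in range(n) for k in range(n))
    allocs = list(product(range(n), repeat=m))
    best_fair = [welfare(a) for a in allocs if envy_free(a)]
    opt = max(welfare(a) for a in allocs)
    return opt / max(best_fair) if best_fair else float("inf")

# Agent 0 values both goods at 10, agent 1 at 1: the optimal allocation
# gives everything to agent 0, but envy-freeness forces a split.
print(price_of_envy_freeness([[10, 10], [1, 1]]))  # -> 20/11 ~= 1.82
```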
Abstract: In this paper, the impact of burstification delay on TCP traffic statistics is presented, as well as a new assembly scheme that uses the flow window size as the threshold criterion. It is shown that short assembly times are ideally suited to sources with small congestion windows, allowing for a speed-up in their transmission. In addition, large assembly times do not yield any throughput gain, despite the large number of segments per burst transmitted, but result in low throughput variation, and thus a higher notion of fairness among the individual flows. To this end, we propose a new burst assembly scheme that dynamically assigns flows to different assembly queues with different assembly timers, based on their instantaneous window size. Results show that the proposed scheme with different timers provides a higher average throughput together with a smaller variance, which is a good compromise for bandwidth dimensioning.
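A minimal sketch of the scheme's core decision, assigning a flow to an assembly queue according to its instantaneous congestion window; the timer values and the threshold below are illustrative assumptions, not the paper's tuned parameters.

```python
def assembly_queue(cwnd_segments, threshold=16):
    """Flows with small windows go to a short-timer queue, so their bursts
    are released quickly and the window can grow fast; large-window flows
    go to a long-timer queue, trading delay for fuller bursts and lower
    throughput variation."""
    SHORT_TIMER_MS, LONG_TIMER_MS = 0.5, 5.0  # illustrative values
    if cwnd_segments < threshold:
        return ("short", SHORT_TIMER_MS)
    return ("long", LONG_TIMER_MS)

for cwnd in (2, 8, 32, 64):
    print(cwnd, assembly_queue(cwnd))
```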