Abstract: We study the problem of data propagation in sensor networks,
comprised of a large number of very small and low-cost nodes,
capable of sensing, communicating and computing. The distributed
co-operation of such nodes may lead to the accomplishment of large
sensing tasks, having useful applications in practice. We present
a new protocol for data propagation towards a control center
("sink") that avoids flooding by probabilistically favoring
certain ("close to optimal") data transmissions.
This protocol is very simple to implement in sensor devices
and operates under total absence
of co-ordination between sensors. We consider a network model of randomly deployed sensors of sufficient density.
As shown by a geometric analysis,
the protocol is correct, since it always propagates data
to the sink under ideal network conditions (no failures). Using
stochastic processes, we show that the protocol is very energy efficient. Also, when part of the network is inoperative, the
protocol still manages to propagate data very close to the sink, and in
this sense it is robust. We finally present and discuss
large-scale experimental findings validating the analytical
results.
Abstract: We study the problem of data propagation in sensor networks,
comprised of a large number of very small and low-cost nodes,
capable of sensing, communicating and computing. The distributed
co-operation of such nodes may lead to the accomplishment of large
sensing tasks, having useful applications in practice. We present a new protocol for data propagation towards a control center ("sink") that avoids flooding by probabilistically favoring certain ("close to optimal") data transmissions. Motivated by certain applications, and also as a starting point for a rigorous analysis, we study here lattice-shaped sensor networks. We show, however, that this lattice shape emerges even in randomly deployed sensor networks of sufficient sensor density. Our work is inspired by and builds upon the directed diffusion paradigm.
This protocol is very simple to implement in sensor devices, uses only local information, and operates under total absence of co-ordination between sensors. We consider a network model of randomly deployed sensors of sufficient density. As shown by a geometric analysis, the protocol is correct, since it always propagates data to the sink under ideal network conditions (no failures). Using stochastic processes, we show that the protocol is very energy efficient. Also, when part of the network is inoperative, the protocol still manages to propagate data very close to the sink, and in this sense it is robust. We finally present and discuss large-scale experimental findings validating the analytical results.
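The core idea of probabilistically favoring "close to optimal" transmissions can be illustrated with a toy simulation on a lattice. This is only a sketch under simplifying assumptions: the forwarding rule here (prefer the neighbour nearest to the sink with probability p_opt, otherwise pick a random neighbour) and the names propagate and p_opt are illustrative, not the paper's exact angle-based rule.

```python
import math
import random

def propagate(source, sink, n, p_opt=0.9, rng=None, max_hops=10_000):
    """Forward data hop-by-hop on an n x n lattice toward the sink.

    At each hop the current node picks, with probability p_opt, the
    neighbour closest to the sink (the "close to optimal" transmission);
    otherwise it picks a uniformly random neighbour.  Returns the number
    of hops used, or None if the data never reached the sink.
    """
    rng = rng or random.Random()
    pos = source
    for hop in range(max_hops):
        if pos == sink:
            return hop
        x, y = pos
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n]
        best = min(nbrs, key=lambda q: math.dist(q, sink))
        pos = best if rng.random() < p_opt else rng.choice(nbrs)
    return None
```

With a high p_opt the data reaches the sink in close to the minimum number of hops while avoiding any flooding, which is the trade-off the abstract describes.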
Abstract: Smart Dust is a set of a vast number of ultra-small fully
autonomous computing and communication devices, with very
restricted energy and computing capabilities, that co-operate to
quickly and efficiently accomplish a large sensing task. Smart
Dust can be very useful in practice, e.g., in the local detection of
a remote crucial event and the propagation of data reporting its
realization. In this work we continue (see [POMC02]) our
effort towards the research on smart dust from a basic algorithmic
point of view. Under a simple but realistic model for smart dust
we present an interesting problem, which is how to propagate
efficiently information on an event detected locally. Then we
present a new smart dust protocol, which we call the
"Sleep-Awake" protocol, for information propagation that explicitly uses the energy-saving features (i.e., the alternation of sleeping and awake time periods) of the smart dust particles. By using both a first probabilistic analysis and extensive
experiments, we provide some first concrete results for the
success probability and the time and energy efficiency of the
protocol, in terms of parameters of the smart dust network. We
note that the study of the interplay of these parameters allows us
to program the smart dust network characteristics accordingly.
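The interplay between the sleep/awake parameters and the success probability can be illustrated with a small Monte-Carlo estimate. This is a deliberately coarse abstraction (my assumption, not the paper's model): each of m relay particles is treated as independently awake with probability awake_frac at the moment the message arrives, whereas real particles alternate sleep and awake periods, so arrivals are correlated.

```python
import random

def relay_success_rate(m, awake_frac, trials=10_000, rng=None):
    """Monte-Carlo estimate of the probability that a message crosses a
    chain of m smart-dust particles, each independently awake with
    probability awake_frac when the message reaches it."""
    rng = rng or random.Random()
    ok = sum(all(rng.random() < awake_frac for _ in range(m))
             for _ in range(trials))
    return ok / trials
```

Under this independence assumption the success probability is simply awake_frac ** m, which shows concretely how the network designer can trade energy savings (smaller awake fraction) against delivery probability.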
Abstract: The existence of good probabilistic models for the job
arrival process and the delay components introduced at
the different stages of job processing in a Grid
environment is important for the improved understanding
of the computing concept envisioned by the Grid. In this
study, we present a thorough analysis of the job arrival
process in the EGEE infrastructure and the time
durations a job spends at different states in the EGEE
environment. We define four delay components of the
total job delay and model each component separately.
We observe that the job inter-arrival times at the Grid
level can be adequately modeled by a rounded
exponential distribution, while the total job delay (from
the time it is generated until the time it completes
execution) is dominated by the Computing Element’s
queuing and Worker Node’s execution times.
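The "rounded exponential" model for inter-arrival times can be sketched very simply: an exponential variate rounded to the nearest second. The rate and the one-second rounding granularity below are illustrative assumptions, not the paper's fitted values.

```python
import random

def rounded_exp_interarrivals(rate, n, rng=None):
    """Draw n job inter-arrival times from a rounded exponential model:
    an Exponential(rate) variate rounded to the nearest integer second."""
    rng = rng or random.Random()
    return [round(rng.expovariate(rate)) for _ in range(n)]
```

Such a model keeps the memoryless bulk behaviour of the exponential while matching the integer-valued timestamps typically recorded by Grid middleware logs.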
Abstract: We investigate the problem of how to achieve energy balanced data propagation in distributed wireless sensor networks. The energy balance property guarantees that the average per sensor energy dissipation is the same for all sensors in the network, throughout the execution of the data propagation protocol. This property is crucial for prolonging the network lifetime, by avoiding early energy depletion of sensors.
We survey representative solutions from the state of the art. We first present a basic algorithm that in each step probabilistically decides whether to propagate data one hop towards the final destination (the sink) or to send it directly to the sink. This randomized choice trades off the (cheap, but slow) one-hop transmissions against the direct transmissions to the sink, which are more expensive but bypass the bottleneck region around the sink and propagate data fast. By a detailed analysis using properties of stochastic processes and recurrence relations, we precisely estimate (even in closed form) the probability of each propagation option that is necessary for energy balance.
The fact (shown by our analysis) that direct (expensive) transmissions to the sink are needed only rarely shows that our protocol, besides being energy balanced, is also energy efficient. We then enhance this basic result by surveying some recent findings, including a generalized algorithm, demonstrating the optimality of this two-way probabilistic data propagation, and providing formal proofs of the energy optimality of the energy balance property.
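The per-slice energy bookkeeping behind this two-option scheme can be sketched as follows. Assumptions of this sketch (not the paper's exact model): the network is split into ring slices 1..n around the sink, a one-hop transmission costs hop_cost, a direct transmission from slice i costs i**2 (distance-squared attenuation), and lists p and gen are 1-indexed with index 0 unused.

```python
def slice_energy(p, gen, hop_cost=1.0):
    """Expected energy dissipated in each slice of the two-way scheme.

    A message in slice i is forwarded one hop inward with probability
    p[i] (cost hop_cost) or sent directly to the sink with probability
    1 - p[i] (cost i**2).  gen[i] is the number of messages generated
    in slice i.  Returns the expected energy spent in slices 1..n.
    """
    n = len(gen) - 1
    handled = [0.0] * (n + 1)
    energy = [0.0] * (n + 1)
    for i in range(n, 0, -1):
        # messages handled = locally generated + those hopped inward
        inflow = p[i + 1] * handled[i + 1] if i < n else 0.0
        handled[i] = gen[i] + inflow
        energy[i] = handled[i] * (p[i] * hop_cost + (1 - p[i]) * i * i)
    return energy[1:]
```

The analysis described in the abstract amounts to choosing the p[i] that make these per-slice values equal; for example, always hopping (p[i] = 1) overloads the slice next to the sink, while always transmitting directly (p[i] = 0) penalizes the outer slices quadratically.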
Abstract: We consider applications of probabilistic techniques in the
framework of algorithmic game theory. We focus on three distinct case
studies: (i) The exploitation of the probabilistic method to demonstrate
the existence of approximate Nash equilibria of logarithmic support sizes
in bimatrix games; (ii) the analysis of the statistical conflict that mixed
strategies cause in network congestion games; (iii) the effect of coalitions
on the quality of congestion games on parallel links.
Abstract: The existence of good probabilistic models for the job
arrival process and job characteristics is important for
the improved understanding of grid systems and the
prediction of their performance. In this study, we
present a thorough analysis of the job inter-arrival
times, the waiting times at the queues, the execution
times, and the data sizes exchanged at the
kallisto.hellasgrid.gr cluster, which is part of the
EGEE Grid infrastructure. By computing the Hurst
parameter of the inter-arrival times we find that the
job arrival process exhibits self-similarity/long-range
dependence. We also propose simple and intuitive
models for the job arrival process and the job
execution times. The models proposed were validated
and were found to be in very good agreement with our
empirical measurements.
Abstract: The existence of good probabilistic models
for the job arrival process and the delay components
introduced at different stages of job processing in a
Grid environment is important for the improved
understanding of the Grid computing concept. In this
study, we present a thorough analysis of the job
arrival process in the EGEE infrastructure and of the
time durations a job spends at different states in the
EGEE environment. We define four delay
components of the total job delay and model each
component separately. We observe that the job inter-arrival
times at the Grid level can be adequately modelled by
a rounded exponential distribution, while the total job
delay (from the time it is generated until the time it
completes execution) is dominated by the computing
element’s register and queuing times and the worker
node’s execution times. Further, we evaluate the
efficiency of the EGEE environment by comparing
the job total delay performance with that of a hypothetical ideal super-cluster and conclude that we
would obtain similar performance if we submitted the
same workload to a super-cluster of size equal to 34%
of the total average number of CPUs participating in
the EGEE infrastructure. We also analyze the job
inter-arrival times, the CE’s queuing times, the WN’s
execution times, and the data sizes exchanged at the
kallisto.hellasgrid.gr cluster, which is a node in the
EGEE infrastructure. In contrast to the Grid level, we
find that at the cluster level the job arrival process
exhibits self-similarity/long-range dependence.
Finally, we propose simple and intuitive models for the job
arrival process and the execution times at the cluster
level.
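The additive decomposition of the total job delay into separately modelled components can be sketched as follows. The component names partially follow the abstract (registration and queuing at the Computing Element, execution at the Worker Node); the fourth component and all the exponential distributions and mean values below are illustrative assumptions, not the paper's fitted models.

```python
import random

def total_job_delay(rng=None):
    """Toy model: total delay = sum of four independently drawn
    components, mirroring the component-wise modelling approach."""
    rng = rng or random.Random()
    register = rng.expovariate(1 / 5)    # CE registration time (s)
    queue    = rng.expovariate(1 / 120)  # CE queuing time (dominant)
    dispatch = rng.expovariate(1 / 10)   # hand-off to the Worker Node
    execute  = rng.expovariate(1 / 300)  # WN execution time (dominant)
    return register + queue + dispatch + execute
```

Modelling each stage separately and summing makes it easy to see which components dominate the total delay and to experiment with what-if scenarios such as the ideal super-cluster comparison in the abstract.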
Abstract: In this paper we solve a particular instance of a class of differential equations that appears not to be
widely documented in the literature and results from the theoretical analysis of a probabilistic key agreement protocol parameterized by an integer k > 1. This class consists of differential equations of the form dy(t)/dt = Σ_{i=0}^{k+1} f_i(t) y(t)^i, with k > 1 and f_i(t), 0 ≤ i ≤ k+1, specific real functions of t. This form appears to generalize the class of Abel differential equations of the first kind in that it involves the sought function raised to powers greater than 3, for k > 2. For k > 2 the class of differential equations is not amenable to the general solution methodology of the Abel class of differential equations. To the best of our knowledge, there are no previous efforts to tackle, analytically, differential equations of this form. In this paper we focus on the case k = 3. We show that the solution to the resulting differential equation may be written through the use of a class of Generalized Hyper-Lambert functions. These functions generalize the well-known Lambert W function, which frequently arises in the analysis of engineering applications. The solution to the differential equation is achieved, interestingly, through a reduction to a particular real root of the degree-four polynomial that appears in the right-hand side of the differential equation. This technique can be of independent interest and can, more generally, be applicable to other differential equations of a similar form not amenable to known analytic techniques.
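Equations of this shape are easy to sanity-check numerically, which complements the analytic solution. Below is a sketch for k = 3 using the classical 4th-order Runge-Kutta method; the coefficient choices f_0(t) = 1, f_4(t) = -1 (and the rest zero) are illustrative placeholders, not the functions arising from the key agreement protocol.

```python
def rk4(f, t0, y0, t1, steps=1000):
    """Classical 4th-order Runge-Kutta integration of y' = f(t, y)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# For k = 3 the right-hand side is a degree-4 polynomial in y whose
# coefficients f_0(t), ..., f_4(t) may depend on t.  Illustrative choice:
# dy/dt = 1 - y**4, whose positive equilibrium is the real root y = 1
# of the quartic on the right-hand side.
COEFFS = (lambda t: 1.0, lambda t: 0.0, lambda t: 0.0,
          lambda t: 0.0, lambda t: -1.0)

def rhs(t, y):
    return sum(c(t) * y ** i for i, c in enumerate(COEFFS))
```

Starting from y(0) = 0, the trajectory converges to the real root y = 1 of the quartic, illustrating in miniature the role that a particular real root of the degree-four polynomial plays in the analytic solution.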