Abstract
In this paper, a novel method is presented for determining the stochastic mean-square (m-s) stability of sensor-supervisory node loops in a distributed sensor network (DSN). The method explicitly accounts for the transition characteristics of the time-varying delays encountered in communication networks by modeling them as Markov processes, which yields less conservative controller parameters. For stationary delay processes, it is shown that the underlying dynamics can be described by a time-invariant model, providing a significant computational advantage. Non-stationary Markov delay processes can be studied via methods available for deterministic time-varying discrete-time systems. The proposed method allows for variable 'granularity' in the controller design, providing a seamless trade-off between accuracy and computational effort.
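As an illustration of the kind of analysis the abstract describes (not the paper's specific method), the closed loop with a Markov delay can be viewed as a discrete-time Markov jump linear system x_{k+1} = A_{theta_k} x_k, whose m-s stability is classically decided by a spectral-radius test on the second-moment dynamics: the system is m-s stable iff rho((P^T kron I) blockdiag(A_i kron A_i)) < 1, where P is the delay chain's transition matrix. The sketch below implements this standard test; the mode matrices and transition probabilities are hypothetical toy values.

```python
import numpy as np

def ms_stable(A_modes, P, tol=1e-9):
    """Second-moment m-s stability test for a discrete-time Markov jump
    linear system x_{k+1} = A_{theta_k} x_k, where theta_k is a Markov
    chain with row-stochastic transition matrix P (P[i, j] = Pr(next
    mode j | current mode i)).  Stable iff the spectral radius of
    Lam = (P^T kron I_{n^2}) @ blockdiag(A_i kron A_i) is below 1."""
    n = A_modes[0].shape[0]
    N = len(A_modes)
    # Block-diagonal matrix of the per-mode second-moment maps A_i kron A_i.
    D = np.zeros((N * n * n, N * n * n))
    for i, A in enumerate(A_modes):
        D[i * n * n:(i + 1) * n * n, i * n * n:(i + 1) * n * n] = np.kron(A, A)
    Lam = np.kron(P.T, np.eye(n * n)) @ D
    rho = max(abs(np.linalg.eigvals(Lam)))
    return rho < 1 - tol, rho

# Hypothetical scalar loop with two delay modes; mode 2 alone is unstable
# (|1.2| > 1), but the chain visits it rarely, so the loop is m-s stable.
A = [np.array([[0.5]]), np.array([[1.2]])]
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
stable, rho = ms_stable(A, P)
print(stable, rho)  # second-moment spectral radius is about 0.75 < 1
```

Note how the test uses the full transition matrix P rather than only the stationary delay distribution; this is what lets a Markov delay model certify stability in cases (like the toy example above, which contains an unstable mode) where an i.i.d. or worst-case delay analysis would be more conservative.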