Some Consequences of the Complexity of Intelligent Prediction
Abstract
What is the relationship between the complexity of a learner
and the randomness of his mistakes? This question was posed in [4], where it was shown that the more complex the learner, the more likely his mistakes are to deviate from a truly random sequence. In the current paper we report on an empirical investigation of this problem. We investigate two characteristics of randomness: the stochastic and algorithmic complexity of the binary sequence of mistakes. A learner with a Markov model of order k is trained on a finite binary sequence produced by a Markov source of order k* and is tested on a different random sequence. As a measure of the learner’s complexity we define a quantity called the sysRatio, denoted by ρ, which is the ratio between the compressed and uncompressed lengths of the binary string whose ith bit represents the maximum a posteriori decision made at state i of the learner’s model; ρ is thus a measure of the information density of the learner’s decisions. The main result of the paper shows that this ratio is crucial in answering the question posed above. The result indicates that there is a critical threshold ρ* such that when ρ ≤ ρ* the sequence of mistakes possesses the following features: (1) low divergence Δ from a random sequence, and (2) low variance in algorithmic complexity. When ρ > ρ*, the characteristics of the mistake sequence change sharply towards high Δ and high variance in algorithmic complexity. It is also shown that ρ is inversely proportional to the model order k and that the threshold ρ* corresponds to the true order k*: this is the point at which the learner’s model becomes too simple to approximate the Bayes-optimal decision, and it is here that the characteristics of the mistake sequence change sharply.
Keywords
learning, sequence prediction, descriptive complexity
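As a concrete illustration of the sysRatio defined in the abstract, the following is a minimal Python sketch. It assumes a generic zlib compressor stands in for whatever compressor the paper actually uses, and the decision string shown is hypothetical; it is not the paper's implementation, only an instance of the compressed-to-uncompressed length ratio it describes.

import zlib

def sys_ratio(map_decisions: str) -> float:
    """Estimate the sysRatio rho: the ratio between the compressed and
    uncompressed lengths of the binary string whose ith character is the
    maximum a posteriori decision made at state i of the learner's model.
    zlib is used here as a stand-in lossless compressor (an assumption)."""
    raw = map_decisions.encode("ascii")       # e.g. "0110...", one decision per state
    compressed = zlib.compress(raw, 9)        # generic lossless compression
    return len(compressed) / len(raw)

# Hypothetical example: a Markov learner of order k = 6 has 2**6 = 64 states,
# so its MAP decisions form a 64-character binary string.
decisions = "0101" * 16
rho = sys_ratio(decisions)
print(f"sysRatio rho = {rho:.3f}")
# A low rho means the decision string is highly compressible, i.e. the
# learner's decisions carry low information density; a rho near 1 means
# the decisions are close to incompressible.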