**Default target attribute**: Class
**Author**: Peter Turney (peter@ai.iit.nrc.ca)
**Source**: [UCI](https://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/vowel/) - date unknown
**Please cite**: [UCI citation policy](https://archive.ics.uci.edu/ml/citation_policy.html)
**Vowel Recognition (Deterding data)**
Speaker-independent recognition of the eleven steady-state vowels of British English, using a specified training set of LPC-derived log area ratios.
Collected by David Deterding (data and non-connectionist analysis), Mahesan Niranjan (first connectionist analysis), and Tony Robinson (description, program, data, and results).
A very comprehensive description, including comments by the authors, can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/vowel/vowel.names).
The problem is specified by the accompanying data file, "vowel.data". This
consists of a three dimensional array: voweldata [speaker, vowel, input].
The speakers are indexed by integers 0-89. (Actually, there are fifteen
individual speakers, each saying each vowel six times.) The vowels are
indexed by integers 0-10. For each utterance, there are ten floating-point
input values, with array indices 0-9.
The problem is to train the network as well as possible using only data
from "speakers" 0-47, and then to test the network on speakers 48-89,
reporting the number of correct classifications in the test set.
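The layout and split described above can be sketched in NumPy. This is only an illustration: it assumes the utterances have already been parsed into the three-dimensional array `voweldata[speaker, vowel, input]` of shape (90, 11, 10), and uses random numbers as a stand-in for the real values.

```python
import numpy as np

# Stand-in for the parsed vowel.data: voweldata[speaker, vowel, input]
# with "speakers" 0-89, vowels 0-10, and ten inputs per utterance.
rng = np.random.default_rng(0)
voweldata = rng.standard_normal((90, 11, 10))

# Speaker-independent split: "speakers" 0-47 for training, 48-89 for testing.
train = voweldata[:48]                  # shape (48, 11, 10)
test = voweldata[48:]                   # shape (42, 11, 10)

# Flatten to (utterance, features) rows with a matching vowel label 0-10.
# The reshape iterates speaker-major, so labels repeat 0..10 per speaker.
X_train = train.reshape(-1, 10)         # (528, 10)
y_train = np.tile(np.arange(11), 48)
X_test = test.reshape(-1, 10)           # (462, 10)
y_test = np.tile(np.arange(11), 42)

print(X_train.shape, X_test.shape)      # (528, 10) (462, 10)
```

The 528/462 row counts match the standard train/test sizes for this benchmark (48 and 42 "speakers", each contributing eleven utterances).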
For a more detailed explanation of the problem, see the excerpt from Tony
Robinson's Ph.D. thesis in the COMMENTS section. In Robinson's opinion,
connectionist problems fall into two classes, the possible and the
impossible. He is interested in the latter, by which he means problems
that have no exact solution. Thus the problem here is not to see how fast
a network can be trained (although this is important), but to maximise a
less than perfect performance.
#### METHODOLOGY
Report the number of test vowels classified correctly (i.e. the number of
occurrences where the distance from the correct target output to the actual
output was the smallest of the set of distances from the actual output to
all possible target outputs).
Though this is not the focus of Robinson's study, it would also be useful
to report how long the training took (measured in pattern presentations or
with a rough count of floating-point operations required) and what level of
success was achieved on the training and testing data after various amounts
of training. Of course, the network topology and algorithm used should be
precisely described as well.
#### VARIATIONS
This benchmark is proposed to encourage the exploration of different node
types. Please theorise/experiment/hack. The author (Robinson) will try to
correspond by email if requested. In particular there has been some
discussion recently on the use of a cross-entropy distance measure, and it
would be interesting to see results for that.
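The cross-entropy measure mentioned above can be written as a short sketch, assuming softmax-normalised outputs and one-hot targets; the function names and example values here are illustrative only, not Robinson's formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(targets, probs, eps=1e-12):
    # -sum_k t_k log p_k, averaged over the batch; eps guards log(0).
    return float(-np.mean(np.sum(targets * np.log(probs + eps), axis=-1)))

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])
targets = np.eye(3)[[0, 2]]
print(cross_entropy(targets, softmax(logits)))
```

Unlike the squared-error distance, this measure penalises confidently wrong outputs much more heavily, which is one reason it was of interest for classification benchmarks like this one.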
#### Notes
1. Each of these numbers is based on a single trial with random starting
weights. More trials would of course be preferable, but the computational
facilities available to Robinson were limited.
2. Graphs are given in Robinson's thesis showing test-set performance vs.
epoch count for some of the training runs. In most cases, performance
peaks at around 250 correct, after which performance decays to different
degrees. The numbers given above are final performance figures after about
3000 trials, not the peak performance obtained during the run.
#### REFERENCES
[Deterding89] D. H. Deterding, 1989, University of Cambridge, "Speaker
Normalisation for Automatic Speech Recognition", submitted for PhD.
[NiranjanFallside88] M. Niranjan and F. Fallside, 1988, Cambridge University
Engineering Department, "Neural Networks and Radial Basis Functions in
Classifying Static Speech Patterns", CUED/F-INFENG/TR.22.
[RenalsRohwer89-ijcnn] Steve Renals and Richard Rohwer, "Phoneme
Classification Experiments Using Radial Basis Functions", Submitted to
the International Joint Conference on Neural Networks, Washington,
1989.
[RabinerSchafer78] L. R. Rabiner and R. W. Schafer, Englewood Cliffs, New
Jersey, 1978, Prentice Hall, "Digital Processing of Speech Signals".
[PragerFallside88] R. W. Prager and F. Fallside, 1988, Cambridge University
Engineering Department, "The Modified Kanerva Model for Automatic
Speech Recognition", CUED/F-INFENG/TR.6.
[BroomheadLowe88] D. Broomhead and D. Lowe, 1988, Royal Signals and Radar
Establishment, Malvern, "Multi-variable Interpolation and Adaptive
Networks", RSRE memo, #4148.
[RobinsonNiranjanFallside88-tr] A. J. Robinson and M. Niranjan and F.
Fallside, 1988, Cambridge University Engineering Department,
"Generalising the Nodes of the Error Propagation Network",
CUED/F-INFENG/TR.25.
[Robinson89] A. J. Robinson, 1989, Cambridge University Engineering
Department, "Dynamic Error Propagation Networks".
[McCullochAinsworth88] N. McCulloch and W. A. Ainsworth, Proceedings of
Speech'88, Edinburgh, 1988, "Speaker Independent Vowel Recognition
using a Multi-Layer Perceptron".
[RobinsonFallside88-neuro] A. J. Robinson and F. Fallside, 1988, Proceedings
of nEuro'88, Paris, June, "A Dynamic Connectionist Model for Phoneme
Recognition".
#### Notes
* This is version 2. Version 1 is hidden because it includes a feature dividing the data into train and test sets. In OpenML this information is explicitly available in the corresponding task.