Feature Extraction Methods: A Survey


Introduction

It is often difficult to learn a concept directly from the raw input data: much of the data may be irrelevant to the concept, and the particular values of these irrelevant pieces can cause the learner to draw incorrect conclusions and make poor generalizations. In addition, the raw data might be of a form in which the patterns defining the concept are well hidden. Thus, as a step prior to learning the concept, it is often a good idea to discover which part of the input data is useful, and how this part can be transformed so that the desired concept can be seen. Sometimes the original input data is instead enlarged before the system attempts to learn to classify. This is the problem of feature extraction. The importance of this step is seen in the fact that literally thousands of papers concerning feature extraction have been published in the last few years.

In this paper we will survey some of the recent literature to examine what methods are being explored.

Categorizing the work on feature extraction by particular methods is difficult. One reason for this is that in many cases more than one method is applied, such as performing a Gabor transform prior to finding features with a self-organizing network.

Geometric Representation

In Handprinted Character Recognition Based on Spatial Topology Distance Measurement, Liou and Yang tackle the problem of handwriting recognition where the strokes comprising the letters might be thick. Many algorithms for handwriting recognition reduce the handwriting to a skeleton using thinning algorithms. This has the drawback that parts of the letters are often distorted, particularly at intersections, joints, and ends of characters; in addition, spurious pixels may be created. In this work, characters are instead represented by a fixed number of ellipses which fill each character. Templates of each character are represented the same way, and a distance measure is defined which allows the best match to be selected.

Question: In this paper the ellipses are represented as four-dimensional vectors (x, y, r, θ), where x and y represent the center of the ellipse, r is the radius of the major axis, and θ is the orientation of the major axis. I do not understand how this is an ellipse, as the radius of the minor axis can be anything less than or equal to r.

The character skeletons are found first, using the Voronoi method. The skeletons are regularly sampled to determine ellipse centers.
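To make the representation concrete, here is a toy sketch in Python. The weighted component distance and the greedy nearest-ellipse matching are illustrative assumptions, not Liou and Yang's actual spatial topology distance, whose definition is more involved:

```python
import math

def ellipse_dist(e1, e2, w_pos=1.0, w_r=1.0, w_theta=1.0):
    """Toy distance between two ellipses given as (x, y, r, theta) tuples."""
    x1, y1, r1, t1 = e1
    x2, y2, r2, t2 = e2
    d_pos = math.hypot(x1 - x2, y1 - y2)
    d_r = abs(r1 - r2)
    # orientation differs modulo pi: an ellipse is unchanged by a 180° rotation
    d_t = abs(t1 - t2) % math.pi
    d_t = min(d_t, math.pi - d_t)
    return w_pos * d_pos + w_r * d_r + w_theta * d_t

def char_dist(A, B):
    """Greedy sum of nearest-ellipse distances from character A to template B."""
    return sum(min(ellipse_dist(a, b) for b in B) for a in A)
```

A character would be compared against every template with char_dist and assigned to the template with the smallest distance.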

Fourier Transforms and Wavelets

Neural methods

Of course feature extraction is often used as a step prior to classification by a neural network, but sometimes the neural network itself is used to find the features. In Decision Boundary Feature Extraction for Neural Networks, Lee and Landgrebe expand on their previous work, Feature Extraction Based on Decision Boundaries, and propose a feature extraction method based on the decision boundaries found by a neural network. They define a decision boundary feature matrix (DBFM) as

DBFM = (1/K) ∫_S N(X) N^t(X) p(X) dX

where N(X) is the unit normal vector to the decision boundary at a point X on the boundary, p(X) is a probability density function, K = ∫_S p(X) dX, and S is the decision boundary. The rank of this matrix equals the smallest dimension in which the same classification could be made as in the original space, and its eigenvectors (those with nonzero eigenvalues) are feature vectors which provide the same classification as the original space.


The steps in the algorithm are (assuming two classes):

  1. Train the neural network using all features.
  2. For each correctly classified example in class 1, find the closest correctly classified example in class 2.
  3. Since the line connecting these points must pass through the decision boundary, follow along this line until a point sufficiently close to the boundary is found.
  4. Estimate the vector normal to the decision boundary at that point.
  5. Estimate the decision boundary feature matrix using the normal vectors found.
  6. Determine the eigenvectors of this matrix.
  7. Train a new neural network using the new feature set.
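The core of steps 5 and 6 can be sketched as follows. This is a hypothetical illustration, not Lee and Landgrebe's code; the toy boundary and the tolerance threshold are assumptions:

```python
import numpy as np

# Estimate the decision boundary feature matrix from unit normals sampled
# on the boundary, then keep the eigenvectors of significant eigenvalues.
def estimate_dbfm(normals):
    """normals: (K, d) array of unit normals to the decision boundary."""
    N = np.asarray(normals, dtype=float)
    # empirical version of (1/K) * integral of N(X) N^t(X) p(X) dX
    return N.T @ N / len(N)

def dbfm_features(normals, tol=1e-8):
    M = estimate_dbfm(normals)
    vals, vecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    keep = vals > tol * vals.max()   # nonzero eigenvalues span the subspace
    return vecs[:, keep]

# toy boundary: the hyperplane x0 = 0 in 3-D, so every normal is +-e0
F = dbfm_features([[1.0, 0, 0], [-1.0, 0, 0], [1.0, 0, 0]])
```

Here the estimated matrix has rank 1, so a single feature suffices, matching the intuition that only the first coordinate matters for this boundary.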


Unfortunately this method does not have the advantage of reducing the dimension of the problem prior to learning the concept, but it might serve to reduce computation time if the final trained system is to be used frequently.

Adding dimensions

A case of using feature extraction to increase the dimensionality of the input is seen in Unconstrained Handwritten Numeral Recognition Based on Radial Basis Competitive Networks with Spatio-Temporal Feature Representation, where Lee and Pan take static handwritten numerals and add a temporal dimension.

Genetic Algorithms

In Genetic Synthesis of Unsupervised Learning Algorithms we see an attempt to find alternatives to Kohonen's algorithm in constructing Self-Organizing Maps. The authors propose several different genetic encodings which are generalizations of the Kohonen algorithm.

Fuzzy Methods

Fractals

Principal Component Analysis

Self Organizing Systems

A modification of Kohonen's original self-organizing feature map (SOFM) was proposed by Ritter and Kohonen in Self-Organizing Semantic Maps (SOSM); in the latter, the input data is augmented with class labels. In A Note on Self-Organizing Semantic Maps, Bezdek and Pal argue against using this method, showing that SOFM, principal components, and Sammon's algorithm (A Nonlinear Mapping for Data Structure Analysis; An Optimal Set of Discriminant Vectors) all produce the same results as SOSM, which is more complicated.

The same authors, in An Index of Topological Preservation for Feature Extraction, propose their own modification of Kohonen's original algorithm and compare it with principal component analysis and Sammon's algorithm. The three algorithms are compared with respect to a property called metric topology preservation (MTP), which requires maintaining the pairwise ordering of distances. Of course Sammon mapping attempts to preserve all distances; if it succeeds at this it will also succeed with MTP. The authors show that Sammon's algorithm and PCA perform better with respect to MTP than the modified SOFM.
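Sammon's stress, the quantity a Sammon mapping minimizes, can be written down directly. The sketch below follows the standard definition; the small point sets in the comments are assumptions for illustration:

```python
import math
from itertools import combinations

# Sammon's stress: the normalized error between original pairwise distances
# and the distances after projection to the lower-dimensional space.
def sammon_stress(X, Y):
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    pairs = list(combinations(range(len(X)), 2))
    d_orig = [dist(X[i], X[j]) for i, j in pairs]
    d_proj = [dist(Y[i], Y[j]) for i, j in pairs]
    c = sum(d_orig)
    # sum of (d*_ij - d_ij)^2 / d*_ij, normalized by the total original distance
    return sum((do - dp) ** 2 / do for do, dp in zip(d_orig, d_proj)) / c
```

A perfect projection gives stress 0. Because every pairwise distance appears in the sum, preserving all of them trivially preserves their ordering, which is why success under Sammon's criterion implies success under MTP.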

Self-organization was applied to classification in A Neural Clustering Approach for High Resolution Radar Target Classification. The task was to classify five different vehicles based on high-resolution radar (HRR) scans. A Self-Organizing Map was created as a first step, and the vectors thus obtained were refined using the Learning Vector Quantization algorithm. Classification of vehicles was then done using minimum Euclidean distance. Up to 97% classification accuracy was achieved, but it must be pointed out that this is only on the training data; no figures are presented for a separate test set.
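The LVQ refinement and the minimum-distance classification can be sketched as follows. This uses LVQ1, the simplest variant of the algorithm; the learning rate is an illustrative assumption, not a value from the paper:

```python
import math

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: pull the nearest prototype toward sample x if its
    label matches y, push it away otherwise. Returns the winner's index."""
    dists = [math.dist(p, x) for p in prototypes]
    i = dists.index(min(dists))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] = [p + sign * lr * (xi - p) for p, xi in zip(prototypes[i], x)]
    return i

def classify(prototypes, labels, x):
    """Minimum-Euclidean-distance classification against the prototypes."""
    dists = [math.dist(p, x) for p in prototypes]
    return labels[dists.index(min(dists))]
```

In the paper's pipeline the initial prototypes would come from the trained SOM rather than being chosen by hand.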


A very similar approach was taken in Neural Network Based Cloud Classifier though this paper provides little detail, and only reports that "performance of the classifier is relatively good".

Object Recognition

Without prior feature extraction, training a neural network to recognize objects can require very large amounts of labelled training data. In Distortion Tolerant Pattern Recognition Based on Self-Organizing Feature Extraction, Lampinen and Oja propose a method of self-organizing feature extraction which also requires substantial data, but does not require that data to be labelled. Often data is plentiful but manually labelling it is very costly. Once the dimension of the problem has been reduced using the extracted features, a smaller amount of labelled data suffices for the supervised stage.

The system consists of three layers. The first layer performs Gabor transformations on the image. The second layer is a multilayered self-organizing network (MSOM) which clusters the Gabor coefficients. The third layer is a supervised neural network which performs the actual classification.
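The first layer's Gabor transform can be illustrated with a minimal kernel generator: a Gaussian-windowed sinusoidal grating that responds to oriented structure at a given scale. The parameter values below are assumptions for illustration, not the configuration used by Lampinen and Oja:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope multiplied by a
    cosine grating oriented along direction theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates so the grating varies along direction theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            env = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

A bank of such kernels at several orientations and wavelengths, each convolved with the image, yields the Gabor coefficients that the second layer clusters.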

Character Recognition

Bibliography

James C. Bezdek and Nikhil R. Pal. 1995.
A Note on Self-Organizing Semantic Maps.
IEEE Transactions on Neural Networks 6, pp. 1029-1036.

James C. Bezdek and Nikhil R. Pal. 1995.
An Index of Topological Preservation for Feature Extraction.
Pattern Recognition 28, pp. 381-391.

Y.Y. Cai, A.Y.C. Nee, and H.T. Loh. 1996.
Geometric Feature Detection for Reverse Engineering Using Range Imaging.
Journal of Visual Communication and Image Representation 7, pp. 205-216.

Zheru Chi and Hong Yan. 1995.
Handwritten Numeral Recognition Using a Small Number of Fuzzy Rules with Optimized Defuzzification Parameters.
Neural Networks 8, pp. 821-827.

Randall S. Collica, Jill P. Card, and William Martin. 1995.
SRAM Bitmap Shape Recognition and Sorting Using Neural Networks.
IEEE Transactions on Semiconductor Manufacturing 8, pp. 326-332.

Jesus M. Cruz, Gonzalo Pajares, and Joaquin Aranda. 1995.
A Neural Network Model in Stereovision Matching.
Neural Networks 8, pp. 805-813.

Dwight D. Day and Debi Rogers. 1996.
Fourier-Based Texture Measures with Application to the Analysis of the Cell Structure of Baked Products.
Digital Signal Processing 6, pp. 138-144.

Ali Dasdan and Kemal Oflazer. 1993.
Genetic Synthesis of Unsupervised Learning Algorithms.

D.H. Foley and J.W. Sammon. 1978.
An Optimal Set of Discriminant Vectors.
IEEE Transactions on Computing 24, pp. 271-278.

Ashish Ghosh, Nikhil R. Pal, and Sankar K. Pal. 1995.
Modeling of Component Failure In Neural Networks for Robustness Evaluation: An Application to Object Extraction.
IEEE Transactions on Neural Networks 6, pp. 648-656.

Shigekazu Ishihara, Keiko Ishihara, Mitsuo Nagamachi, Yukihiro Matsubara. 1995.
An Automated Builder for a Kansei Engineering Expert System Using Self-organizing Neural Networks.
International Journal of Industrial Ergonomics 15, pp. 13-24.

W.M. Krueger, S.D. Jost, and K. Rossi. 1996.
On Synthesizing Discrete Fractional Brownian Motion with Applications to Image Processing.
Graphical Models and Image Processing 58, pp. 334-344.

Jouko Lampinen and Erkki Oja. 1995.
Distortion Tolerant Pattern Recognition Based on Self-Organizing Feature Extraction.
IEEE Transactions on Neural Networks 6, pp. 539-547.

Steve Lawrence, C. Lee Giles, Ah Chung Tsoi, and Andrew D. Back. 1996.
Face Recognition: A Hybrid Neural Network Approach.

Steve Lawrence, C. Lee Giles, Ah Chung Tsoi, and Andrew D. Back. 1997.
Face Recognition: A Convolutional Neural-Network Approach.
IEEE Transactions on Neural Networks 8, pp. 98-113.

Seong-Whan Lee. 1996.
Off-Line Recognition of Totally Unconstrained Handwritten Numerals Using Multilayer Cluster Neural Networks.
IEEE Transactions on Pattern Analysis and Machine Intelligence 18, pp. 648-652.

Chulhee Lee and David A. Landgrebe. 1997.
Decision Boundary Feature Extraction for Neural Networks.
IEEE Transactions on Neural Networks 8, pp. 75-83.

Chulhee Lee and David A. Landgrebe. 1993.
Feature Extraction Based on Decision Boundaries.
IEEE Transactions on Pattern Analysis and Machine Intelligence 15, pp 388-400.

Sukhan Lee and Jack Chien-Jan Pan. 1996.
Unconstrained Handwritten Numeral Recognition Based on Radial Basis Competitive Networks with Spatio-Temporal Feature Representation.
IEEE Transactions on Neural Networks 7, pp. 455-474.

S. Li and M.A. Elbestawi. 1996.
Fuzzy Clustering for Automated Tool Condition Monitoring in Machining.
Mechanical Systems and Signal Processing 10, pp. 533-550.

Cheng-Yuan Liou and Hsin-Chang Yang. 1996.
Handprinted Character Recognition Based on Spatial Topology Distance Measurement.
IEEE Transactions on Pattern Analysis and Machine Intelligence 18, pp. 941-945.

T.I. Liu, J.H. Singonahalli, and N.R. Iyer. 1996.
Detection of Roller Bearing Defects Using Expert System and Fuzzy Logic.
Mechanical Systems and Signal Processing 10, pp. 595-614.

Jianchang Mao and Anil K. Jain. 1995.
Artificial Neural Networks for Feature Extraction and Multivariate Data Projection.
IEEE Transactions on Neural Networks 6, pp. 296-317.

Kenji Okajima. 1996.
A Model Visual Cortex Incorporating Intrinsic Horizontal Neuronal Connections.
Neural Networks 9, pp. 211-222.

Constantinos S. Pattichis, Christos N. Schizas, and Lefkos T. Middleton. 1995.
Neural Network Models in EMG Diagnosis.
IEEE Transactions on Biomedical Engineering 42, pp. 486-496.

Renzo Perfetti and Emanuele Massarelli. 1997.
Training Spatially Homogeneous Fully Recurrent Neural Networks in Eigenvalue Space.
Neural Networks 10, pp. 125-137.

P.P. Raghu, R. Poongodi, and B. Yegnanarayana. 1995.
A Combined Neural Network Approach for Texture Classification.
Neural Networks 8, pp. 975-987.

H. Ritter and T. Kohonen. 1989.
Self-Organizing Semantic Maps.
Biol. Cybern. 61, pp. 241-254.

Thorsteinn Rognvaldsson. 1993.
Pattern Discrimination Using Feed-Forward Networks - a Benchmark Study of Scaling Behavior.
Neural Computation 5, p. 483.

J.W. Sammon. 1969.
A Nonlinear Mapping for Data Structure Analysis.
IEEE Transactions on Computing 18, pp. 401-409.

Udo Seiffert and Bernd Michaelis. 1995.
Three-dimensional Self-Organizing Maps for Classification of Image Properties.
Second New Zealand Two Stream Conference on Artificial Neural Networks and Expert Systems.

Clayton Stewart, Yi-Chuan Lu, and Victor Larson. 1994.
A Neural Clustering Approach for High Resolution Radar Target Classification.
Pattern Recognition 27, pp. 503-513.

Jayaram K. Udupa and Supun Samarasekera. 1996.
Fuzzy Connectedness and Object Definition: Theory, Algorithms, and Applications in Image Segmentation.
Graphical Models and Image Processing 58, pp. 246-261.

Akio Utsugi. 1997.
Hyperparameter Selection for Self-Organizing Maps.
Neural Computation 9, pp. 623-625.

Akio Utsugi. 1996.
Topology Selection for Self-Organizing Maps.
Neural Systems 7, pp. 727-740.

Ari Visa, Jukka Iivarinen, Kimmo Valkealahti, and Olli Simula. 1995.
Neural Network Based Cloud Classifier.
International Conference on Artificial Neural Networks.

Ya Wu and R. Du. 1996.
Feature Extraction and Assessment Using Wavelet Packets for Monitoring of Machine Processes.
Mechanical Systems and Signal Processing 10, pp. 29-53.
