Abstract: Feature subset selection is an effective way to reduce dimensionality, remove irrelevant data, increase learning accuracy and improve result comprehensibility. Many feature subset selection methods have been proposed and studied for machine learning applications. Feature subset selection can be viewed as the process of identifying and removing as many irrelevant and redundant features as possible, since irrelevant features do not contribute to predictive accuracy and redundant features do not help in obtaining a better predictor, as they mostly provide information that is already present in other features. We develop a novel algorithm that can efficiently and effectively deal with both irrelevant and redundant features and obtain a good feature subset. Based on the minimum spanning tree method, we propose the FAST algorithm. The algorithm works in two steps: in the first step, features are divided into clusters by means of graph-theoretic clustering; in the second step, the most representative feature that is strongly related to the target classes is selected from each cluster to form the final subset of features. Because features in different clusters are relatively independent, the clustering-based scheme of FAST has a high probability of producing a subset of useful and independent features. The proposed FAST algorithm involves the construction of a minimum spanning tree from a weighted complete graph, the partitioning of the minimum spanning tree into a forest with each tree representing a cluster and the selection of representative features from the clusters.
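The two-step scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses absolute Pearson correlation as a stand-in for the feature-relevance measure (the FAST literature typically uses symmetric uncertainty), builds a minimum spanning tree over the features with Kruskal's algorithm, cuts edges above a hypothetical `threshold` parameter to split the tree into a forest of clusters, and keeps from each cluster the feature most correlated with the target.

```python
import numpy as np

def correlation(a, b):
    # Absolute Pearson correlation as a simple stand-in relevance
    # measure (the original method uses symmetric uncertainty).
    return abs(np.corrcoef(a, b)[0, 1])

def fast_feature_subset(X, y, threshold=0.5):
    """Sketch of MST-based two-step feature selection.

    Step 1: build an MST over the features of X (edge weight =
            1 - |correlation|), then cut edges heavier than
            `threshold` so the tree splits into a forest; each
            tree in the forest is one feature cluster.
    Step 2: from each cluster, keep the single feature most
            correlated with the target y.
    """
    n = X.shape[1]
    # Weighted complete graph over features.
    edges = sorted(
        (1.0 - correlation(X[:, i], X[:, j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )

    def find(parent, x):            # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Kruskal's algorithm: grow the minimum spanning tree.
    parent = list(range(n))
    mst = []
    for w, i, j in edges:
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))

    # Cut heavy MST edges; re-union the survivors into clusters.
    parent = list(range(n))
    for w, i, j in mst:
        if w <= threshold:
            parent[find(parent, i)] = find(parent, j)
    clusters = {}
    for f in range(n):
        clusters.setdefault(find(parent, f), []).append(f)

    # One representative per cluster: most relevant to the target.
    return sorted(max(c, key=lambda f: correlation(X[:, f], y))
                  for c in clusters.values())
```

For example, given three features where the second is a near-copy of the first, the first two land in one cluster (their connecting MST edge is light), the third forms its own cluster, and only one of the redundant pair survives alongside the third feature.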
T. Divya and B. Vijaya Babu, 2016. An Efficient Feature Subset Algorithm for High Dimensional Data. Asian Journal of Information Technology, 15: 3730-3733.