- Grąbczewski K,
Meta-Learning in Decision Tree Induction, Springer 2014.
-
Jankowski N, Duch W, Grąbczewski K,
Meta-learning in Computational Intelligence.
The Computational Intelligence (CI) community has developed hundreds of algorithms for intelligent data analysis, but many hard problems in computer vision, signal processing, and text and multimedia understanding, problems that require deep learning techniques, remain open.
Modern data mining packages contain numerous modules for data acquisition, pre-processing, feature selection and construction, instance selection, classification, association and approximation methods, optimization techniques, pattern discovery, clusterization, visualization and post-processing. A large data mining package allows for billions of ways in which these modules can be combined. No human expert can claim to explore and understand all possibilities in the knowledge discovery process.
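As a rough illustration of this combinatorial explosion, a back-of-the-envelope count shows how quickly the number of possible pipelines grows. The module counts below are hypothetical, not taken from any particular package:

```python
# Hypothetical numbers of interchangeable modules per pipeline stage.
stages = {
    "pre-processing": 20,
    "feature selection/construction": 15,
    "instance selection": 10,
    "classification": 50,
}

# Choosing one module per stage multiplies the options.
pipelines = 1
for count in stages.values():
    pipelines *= count

print(pipelines)  # 20 * 15 * 10 * 50 = 150000 four-stage pipelines
```

Allowing stages to be skipped, reordered, or repeated, and counting hyperparameter settings for each module, pushes the total toward the billions of combinations mentioned above.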
This is where algorithms that learn how to learn come to the rescue.
Operating in the space of all available data transformations and optimization techniques, these algorithms use meta-knowledge about learning processes, automatically extracted from the experience of solving diverse problems. Inferences about transformations useful in different contexts help to construct learning algorithms that can uncover various aspects of knowledge hidden in the data. Meta-learning shifts the focus of the whole CI field from individual learning algorithms to the higher level of learning how to learn.
This book defines and reveals new theoretical and practical trends in meta-learning, inspiring the readers to further research in this exciting field.
Studies in Computational Intelligence, Vol. 358, 1st Edition, pp. X + 362, 127 illustrations (76 in color), Springer 2011.
-
Jankowski N, Grąbczewski K,
Universal Meta-Learning Architecture and Algorithms.
There are hundreds of algorithms within data mining. Some of them are used to transform data, some to build classifiers, others for prediction, etc. No one knows all these algorithms well, and no one can know all the arcana of their behavior in all possible applications. How, then, to find the combination of transformations and final machine that best solves a given problem?
The solution is to use configurable and efficient meta-learning to solve data mining problems. Below, a general and flexible meta-learning system is presented. It can be used to solve different computational intelligence problems based on learning from data.
The main ideas of our meta-learning algorithms lie in a complexity-controlled loop searching for the most adequate models, and in a special functional specification of the search spaces (the meta-learning spaces) combined with a flexible way of defining the goal of meta-searching.
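The complexity-controlled search loop might be sketched roughly as follows; the candidate configurations, the complexity estimate, and the evaluation function are all hypothetical placeholders, not the authors' actual system:

```python
import heapq

def meta_search(candidates, evaluate, complexity, budget):
    """Test candidate machine configurations in order of estimated
    complexity (cheapest first), returning the best-scoring one
    found within a fixed budget of evaluations."""
    queue = [(complexity(c), i, c) for i, c in enumerate(candidates)]
    heapq.heapify(queue)
    best_score, best_config = float("-inf"), None
    for _ in range(budget):
        if not queue:
            break
        _, _, config = heapq.heappop(queue)
        score = evaluate(config)  # e.g. cross-validated accuracy
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score

# Toy run: complexity ranks cheap machines first, so the expensive
# "svm" configuration is never reached within a budget of two tests.
scores = {"knn": 0.7, "tree": 0.8, "svm": 0.9}
costs = {"knn": 1, "tree": 2, "svm": 5}
best, score = meta_search(list(scores), scores.get, costs.get, budget=2)
print(best, score)  # tree 0.8
```

Ordering candidates by estimated complexity means cheap models are tried first and the budget is never exhausted on expensive machines before simpler ones have had a chance.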
In: Meta-Learning in Computational Intelligence, Editors: Jankowski N, Duch W, Grąbczewski K, Studies in Computational Intelligence, Springer Berlin / Heidelberg, 2011, Vol. 358, pp. 1-76.
-
Duch W, Maszczyk T, Grochowski M,
Optimal Support Features for Meta-Learning.
Meta-learning has many aspects, but its final goal is to discover in an automatic
way many interesting models for given data. Our early attempts in this area involved
heterogeneous learning systems combined with a complexity-guided search
for optimal models, performed within the framework of (dis)similarity based methods
to discover “knowledge granules”. This approach, inspired by neurocognitive
mechanisms of information processing in the brain, is generalized here to learning
based on parallel chains of transformations that extract useful information granules
and use them as additional features. Various types of transformations that generate hidden
features are analyzed and methods to generate them are discussed. They include
restricted random projections, optimization of these features using projection pursuit
methods, similarity-based and general kernel-based features, conditionally defined
features, features derived from partial successes of various learning algorithms, and
using whole learning models as new features. In the enhanced feature space the
goal of learning is to create an image of the input data that can be directly handled
by relatively simple decision processes. The focus is on hierarchical methods for
generation of information, starting from new support features that are discovered
by different types of data models created on similar tasks and successively building
more complex features on the enhanced feature spaces. Resulting algorithms
facilitate deep learning, and also enable understanding of structures present in the
data by visualization of the results of data transformations and by creating logical,
fuzzy and prototype-based rules based on new features. Relations to various
machine-learning approaches, comparison of results, and neurocognitive inspirations
for meta-learning are discussed.
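One of the simplest transformations listed above, restricted random projections, might look something like the sketch below. This is a guess at the general idea, not the paper's exact procedure; the function name and parameters are invented:

```python
import numpy as np

def restricted_random_projections(X, n_new, subset_size, seed=0):
    """Create new hidden features, each a random linear combination
    of a small random subset of the original features ("restricted"
    because each projection sees only a few input dimensions)."""
    rng = np.random.default_rng(seed)
    n_samples, n_orig = X.shape
    Z = np.empty((n_samples, n_new))
    for j in range(n_new):
        idx = rng.choice(n_orig, size=subset_size, replace=False)
        weights = rng.standard_normal(subset_size)
        Z[:, j] = X[:, idx] @ weights
    return Z

# Enhanced feature space: original features plus the projections.
X = np.arange(12.0).reshape(4, 3)
X_enhanced = np.hstack([X, restricted_random_projections(X, n_new=5, subset_size=2)])
print(X_enhanced.shape)  # (4, 8)
```

A downstream learner then operates on the enhanced space, where useful projections can serve as support features alongside the originals.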
Book chapter, in: Meta-Learning in Computational Intelligence. Studies in Computational Intelligence. Eds: N. Jankowski, W. Duch, K. Grąbczewski, Springer 2011, pp. 317-358.
-
Jankowski N.
Meta-uczenie w inteligencji obliczeniowej (Meta-learning in computational intelligence).
Warsaw, Poland: Akademicka Oficyna Wydawnicza EXIT, 2011, 396 pages (hopefully soon to be available in English).
- Duch W,
Towards comprehensive foundations of computational intelligence.
Although computational intelligence (CI) covers a vast variety of different methods, it still lacks an integrative theory. Several proposals for CI foundations are discussed: computing and cognition as compression, meta-learning as search in the space of data models, (dis)similarity based methods providing a framework for such meta-learning, and a more general approach based on chains of transformations. Many useful transformations that extract information from features are discussed. Heterogeneous adaptive systems are presented as a particular example of transformation-based systems, and the goal of learning is redefined to facilitate creation of simpler data models. The need to understand data structures leads to techniques for logical and prototype-based rule extraction, and to generation of multiple alternative models, while the need to increase the predictive power of adaptive models leads to committees of competent models. Learning from partial observations is a natural extension towards reasoning based on perceptions, and an approach to intuitive solving of such problems is presented. Throughout the paper neurocognitive inspirations are frequently used and are especially important in modeling of the higher cognitive functions. Promising directions such as liquid and laminar computing are identified and many open problems are presented.
In: W. Duch and J. Mandziuk, Challenges for Computational Intelligence.
Springer Studies in Computational Intelligence, Vol. 63, 261-316, 2007.