As access to supercomputers with large amounts of memory has become easier, machine learning algorithms have continued to grow in the resources they require for data analysis. This paper aims to move in the opposite direction, showing that a combination of feature bagging and ensembles of Extreme Learning Machines (ELMs) makes it possible to apply machine learning, without loss of accuracy, on devices where Flash memory is very scarce and Random-Access Memory (RAM) is even scarcer, such as embedded systems. This novel strategy is called Feature Bagged Extreme Learning Machines (FB-ELMs).
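A minimal sketch of the feature-bagging idea behind FB-ELMs, assuming the standard ELM formulation (random fixed hidden layer, output weights solved by least squares): each ensemble member is trained on a random subset of the features, and predictions are averaged. All names, sizes, and the averaging rule here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=20):
    """Train a single ELM: random fixed hidden layer + analytic output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def train_fb_elm(X, y, n_models=10, n_sub=None):
    """Ensemble of ELMs, each seeing only a random subset of features."""
    d = X.shape[1]
    n_sub = n_sub or max(1, d // 2)
    members = []
    for _ in range(n_models):
        feats = rng.choice(d, size=n_sub, replace=False)  # feature bag
        members.append((feats, train_elm(X[:, feats], y)))
    return members

def predict_fb_elm(members, X):
    # average the members' predictions, each on its own feature subset
    return np.mean([predict_elm(m, X[:, f]) for f, m in members], axis=0)

# toy regression: target depends on the first two of six features
X = rng.normal(size=(200, 6))
y = X[:, 0] + X[:, 1]
ensemble = train_fb_elm(X, y)
pred = predict_fb_elm(ensemble, X)
```

Because each member stores only its feature indices, a small random layer, and a small output-weight matrix, the per-model memory footprint stays small, which is what makes the approach attractive on memory-constrained devices.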
This paper presents a novel methodology for time series prediction, based on Extreme Learning Machines and an adaptive ensemble technique. It is tested successfully on the CIF 2016 competition datasets, which comprise 72 time series in total. Among these, 48 series are artificial, each with 108 training points and 12 testing points (120 values per series), which is more than any of the remaining 24 real time series.
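One common way to combine forecasters adaptively, sketched here as an assumption rather than the paper's exact rule, is to weight each ensemble member by the inverse of its exponentially smoothed recent error, so members that have been accurate lately dominate the combined one-step forecast:

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_ensemble_forecast(member_preds, y_true, alpha=0.9):
    """member_preds: (T, M) one-step forecasts from M models over T steps.
    Combines them with weights inversely proportional to each model's
    exponentially smoothed squared error (an illustrative update rule)."""
    T, M = member_preds.shape
    err = np.ones(M)                         # running squared error per member
    combined = np.empty(T)
    for t in range(T):
        w = 1.0 / err
        w /= w.sum()                         # normalized inverse-error weights
        combined[t] = member_preds[t] @ w    # weighted one-step forecast
        e = (member_preds[t] - y_true[t]) ** 2
        err = alpha * err + (1 - alpha) * e  # exponential smoothing of errors
    return combined

# toy example: member 0 tracks the series closely, member 1 is noisy
y = np.sin(np.linspace(0, 6, 120))
preds = np.stack([y + 0.01 * rng.normal(size=120),
                  y + 1.0 * rng.normal(size=120)], axis=1)
combined = adaptive_ensemble_forecast(preds, y)
```

With the 120-value series of the CIF 2016 setup, such per-step reweighting lets the ensemble shift toward whichever member currently fits the series best, without retraining any model.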
This paper presents a novel dimensionality reduction technique based on ELM and SOM: ELM-SOM+. This technique preserves the intrinsic qualities of the Self-Organizing Map (SOM): it is nonlinear and suitable for big data. It also brings continuity to the projection by using two Extreme Learning Machine (ELM) models: the first performs the dimensionality reduction and the second performs the reconstruction. ELM-SOM+ is tested successfully on nine diverse datasets. In terms of reconstruction error, the new methodology shows considerable improvement over SOM while bringing continuity.
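The two-ELM structure described above can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's implementation: a small hand-rolled SOM assigns each sample the grid coordinates of its best-matching unit, one ELM learns the data-to-coordinates mapping (a continuous projection), and a second ELM learns the coordinates-to-data mapping (the reconstruction).

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_som(X, rows=5, cols=5, iters=2000, lr=0.5, sigma=1.5):
    """Tiny online SOM: returns the 2-D grid and the fitted prototypes."""
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    proto = X[rng.choice(len(X), rows * cols)]            # init from data
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((proto - x) ** 2).sum(1))        # best-matching unit
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        proto += (lr * (1 - t / iters)) * h[:, None] * (x - proto)
    return grid, proto

def train_elm(X, Y, n_hidden=40):
    """ELM regressor: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), Y, rcond=None)
    return lambda Z: np.tanh(Z @ W + b) @ beta

# toy data: a noisy 1-D curve embedded in 3-D
t = rng.uniform(0, 3, 300)
X = np.stack([t, np.sin(t), np.cos(t)], 1) + 0.05 * rng.normal(size=(300, 3))

grid, proto = fit_som(X)
bmu_coords = grid[np.argmin(((X[:, None] - proto) ** 2).sum(2), axis=1)]

project = train_elm(X, bmu_coords)      # ELM 1: data -> continuous 2-D projection
reconstruct = train_elm(project(X), X)  # ELM 2: 2-D projection -> data space
X_hat = reconstruct(project(X))
```

Unlike the raw SOM, which snaps every sample to a discrete grid node, the first ELM produces a smooth map from data space to the plane, which is the continuity property the abstract refers to.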