
Neural Networks

  1. Neural Network Model for Apparent Deterministic Chaos in Spontaneously Bursting Hippocampal Slices

    Authors: B. Biswal, C. Dasgupta
    Comments: 4 pages, 5 .eps figures (included), requires psfrag.sty (included)
    Subj-class: Disordered Systems and Neural Networks; Biological Physics; Chaotic Dynamics
    Journal-ref: Phys. Rev. Lett. 88, 088102 (2002)

    A neural network model that exhibits stochastic population bursting is studied by simulation. First return maps of inter-burst intervals exhibit recurrent unstable periodic orbit (UPO)-like trajectories similar to those found in experiments on hippocampal slices. Applications of various control methods and surrogate analysis for UPO-detection also yield results similar to those of experiments. Our results question the interpretation of the experimental data as evidence for deterministic chaos and suggest caution in the use of UPO-based methods for detecting determinism in time-series data.

  2. Some Exact Results of Hopfield Neural Networks and Applications

    Authors: Hong-Liang Lu, Xi-Jun Qiu
    Comments: 4 pages, latex, no figures
    Subj-class: Biological Physics

    A set of fixed points of a Hopfield-type neural network is investigated. Its connection matrix is constructed according to the Hebb rule from a highly symmetric set of memorized patterns. An analytic description of the fixed-point set is obtained as a function of an external parameter, yielding some exact results for Hopfield neural networks.
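
    For illustration, the Hebb construction and a fixed-point check can be sketched as follows (a minimal example with randomly chosen patterns rather than the paper's highly symmetric set; all names here are mine):

        import numpy as np

        rng = np.random.default_rng(0)
        N, p = 100, 5                        # neurons, memorized patterns
        patterns = rng.choice([-1, 1], size=(p, N))

        # Hebb rule: superposition of outer products, no self-coupling
        W = (patterns.T @ patterns) / N
        np.fill_diagonal(W, 0.0)

        def recall(s, steps=10):
            for _ in range(steps):
                s = np.where(W @ s >= 0, 1, -1)   # synchronous sign update
            return s

        # far below capacity (p << N), each memorized pattern is a fixed point
        print(all(np.array_equal(recall(q), q) for q in patterns))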

  3. Artificial Neural Network Modeling of Forest Tree Growth

    Authors: Christopher Gordon
    Comments: 86 pages, 19 figures, submitted as an MSc research report to the University of the Witwatersrand, South Africa
    Subj-class: Data Analysis, Statistics and Probability

    The problem of modeling forest tree growth curves with an artificial neural network (NN) is examined. The NN parametric form is shown to be a suitable model if each forest tree plot is assumed to consist of several differently growing sub-plots. The predictive Bayesian approach is used in estimating the NN output. Data from the correlated curve trend (CCT) experiments are used. The NN predictions are compared with those of one of the best parametric solutions, the Schnute model. Analysis of variance (ANOVA) methods are used to evaluate whether any observed differences are statistically significant. From a Frequentist perspective the differences between the Schnute and NN approach are found not to be significant. However, a Bayesian ANOVA indicates that there is a 93% probability of the NN approach producing better predictions on average.

  4. Neural Network Methods for Boundary Value Problems Defined in Arbitrarily Shaped Domains

    Authors: I. E. Lagaris, A. Likas, D. G. Papageorgiou
    Report-no: Preprint no. 7-98, Dept. of Computer Science, Univ. of Ioannina, Greece, 1998
    Subj-class: Neural and Evolutionary Computing; Numerical Analysis; Mathematical Physics; Disordered Systems and Neural Networks; Computational Physics
    ACM-class: C.1.3

    Partial differential equations (PDEs) with Dirichlet boundary conditions defined on boundaries with simple geometry have been successfully treated using sigmoidal multilayer perceptrons in previous works. This article deals with the case of complex boundary geometry, where the boundary is determined by a number of points that belong to it and are closely located, so as to offer a reasonable representation. Two networks are employed: a multilayer perceptron and a radial basis function network. The latter is used to account for the satisfaction of the boundary conditions. The method has been successfully tested on two-dimensional and three-dimensional PDEs and has yielded accurate solutions.
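
    A minimal sketch of the boundary-correction idea as described above (Gaussian RBFs and all function names are my assumptions, not taken from the paper): the RBF weights are solved for so that the correction reproduces, at the sampled boundary points, the mismatch between the perceptron's trial solution and the prescribed Dirichlet values.

        import numpy as np

        def fit_rbf_correction(boundary_pts, mismatch, sigma=0.1):
            """Solve for weights so the RBF sum equals `mismatch` at the
            boundary points (centers placed on the boundary points)."""
            d2 = ((boundary_pts[:, None, :] - boundary_pts[None, :, :]) ** 2).sum(-1)
            G = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian RBF Gram matrix
            return np.linalg.solve(G, mismatch)

        def rbf_correction(x, boundary_pts, weights, sigma=0.1):
            d2 = ((x[:, None, :] - boundary_pts[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2)) @ weights

    Adding this correction to the perceptron output gives a trial solution that matches the prescribed values at the sampled boundary points, which is what lets the boundary geometry be arbitrary.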

  5. On the determination of probability density functions by using Neural Networks

    Authors: Lluis Garrido, Aurelio Juste
    Comments: 13 pages including 3 eps figures. Submitted to Comput. Phys. Commun
    Subj-class: Data Analysis, Statistics and Probability

    It is well known that the output of a Neural Network trained to disentangle two classes has a probabilistic interpretation in terms of the a-posteriori Bayesian probability, provided that a unary representation is taken for the output patterns. This fact is used to make Neural Networks approximate probability density functions from examples in an unbinned way, giving better performance than standard binned procedures. In addition, the mapped p.d.f. has an analytical expression.
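
    The same posterior interpretation yields a density-ratio sketch (my construction, using scikit-learn and a uniform reference sample, not the authors' exact scheme): train a network to separate the data from samples of a known density q(x); Bayes' rule then converts the output y(x) into p(x) = q(x) y(x) / (1 - y(x)).

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        data = rng.normal(0.0, 1.0, 2000)            # draws from the unknown p.d.f.
        ref = rng.uniform(-5.0, 5.0, 2000)           # reference sample, density 1/10

        X = np.concatenate([data, ref]).reshape(-1, 1)
        y = np.concatenate([np.ones_like(data), np.zeros_like(ref)])
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)

        x = np.linspace(-3, 3, 7).reshape(-1, 1)
        post = clf.predict_proba(x)[:, 1]            # a-posteriori P(data | x)
        print(0.1 * post / (1.0 - post))             # unbinned density estimate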

  6. Threshold Noise as a Source of Volatility in Random Synchronous Asymmetric Neural Networks

    Authors: Henrik Bohr, Patrick McGuire, Chris Pershing, Johann Rafelski (University of Arizona)
    Comments: 17 pages, 11 figures, submitted to Neural Computation
    Subj-class: Disordered Systems and Neural Networks; Biological Physics; Adaptation and Self-Organizing Systems

    We study the diversity of complex spatio-temporal patterns of random synchronous asymmetric neural networks (RSANNs). Specifically, we investigate the impact of noisy thresholds on network performance and find that there is a narrow and interesting region of noise parameters where RSANNs display specific features of behavior desired for rapidly 'thinking' systems: accessibility to a large set of distinct, complex patterns.

  7. Characteristic functions and process identification by neural networks

    Authors: Joaquim A. Dente, R. Vilela Mendes (Laboratorio de Mecatronica, DEEC, IST, Lisboa, Portugal)
    Comments: 11 pages Latex, 12 figures in a combined ps-file
    Subj-class: Data Analysis, Statistics and Probability
    Journal-ref: Neural Networks,10 (1997) 1465-1471

    Principal component analysis (PCA) algorithms use neural networks to extract the eigenvectors of the correlation matrix from the data. However, if the process is non-Gaussian, PCA algorithms or their higher-order generalisations provide only incomplete or misleading information on the statistical properties of the data. To handle such situations we propose neural network algorithms, with a hybrid (supervised and unsupervised) learning scheme, which construct the characteristic function of the probability distribution and the transition functions of the stochastic process. Illustrative examples are presented, which include Cauchy and Levy-type processes.
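
    For reference, the target object is the characteristic function phi(u) = E[exp(iux)], which remains well defined for heavy-tailed laws whose moments diverge. An empirical estimate (the standard definition, not the paper's network) is immediate:

        import numpy as np

        rng = np.random.default_rng(0)
        u = np.linspace(0.0, 5.0, 6)

        def ecf(x, u):
            # empirical characteristic function: sample mean of exp(i*u*x)
            return np.exp(1j * np.outer(u, x)).mean(axis=1)

        gauss = rng.normal(size=100_000)
        cauchy = rng.standard_cauchy(size=100_000)
        print(np.abs(ecf(gauss, u)))    # ~ exp(-u**2 / 2)
        print(np.abs(ecf(cauchy, u)))   # ~ exp(-|u|): heavy tails decay slower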

  8. Artificial Neural Network Methods in Quantum Mechanics

    Authors: I. E. Lagaris, A. Likas, D. I. Fotiadis
    Comments: LaTeX file, 29 pages, 11 psfigs, submitted to CPC
    Subj-class: Quantum Physics; Computational Physics; Cellular Automata and Lattice Gases
    Journal-ref: Comput.Phys.Commun. 104 (1997) 1-14

    In a previous article we have shown how one can employ Artificial Neural Networks (ANNs) in order to solve non-homogeneous ordinary and partial differential equations. In the present work we consider the solution of eigenvalue problems for differential and integrodifferential operators, using ANNs. We start by considering the Schrödinger equation for the Morse potential that has an analytically known solution, to test the accuracy of the method. We then proceed with the Schrödinger and the Dirac equations for a muonic atom, as well as with a non-local Schrödinger integrodifferential equation that models the $n+\alpha$ system in the framework of the resonating group method. In two dimensions we consider the well-studied Hénon-Heiles Hamiltonian and in three dimensions the model problem of three coupled anharmonic oscillators. The method in all of the treated cases proved to be highly accurate, robust and efficient. Hence it is a promising tool for tackling problems of higher complexity and dimensionality.
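
    Schematically (my paraphrase, using the entry's own LaTeX notation), with a trial function $\psi_\theta(x) = B(x)\,N(x;\theta)$ built to satisfy the boundary conditions, the eigenvalue problem $H\psi = \epsilon\psi$ is attacked by minimizing a normalized residual of the form

        $$E[\theta] = \frac{\int \left| H\psi_\theta(x) - \epsilon\,\psi_\theta(x) \right|^2 dx}{\int \psi_\theta(x)^2\, dx}$$

    over the network parameters $\theta$, with the eigenvalue $\epsilon$ updated alongside them.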

  9. Artificial Neural Networks for Solving Ordinary and Partial Differential Equations

    Authors: I. E. Lagaris, A. Likas, D. I. Fotiadis
    Comments: LaTeX file, 26 pages, 21 figs, submitted to IEEE TNN
    Report-no: CS-UOI-GR 15-96
    Subj-class: Computational Physics; Cellular Automata and Lattice Gases

    We present a method to solve initial and boundary value problems using artificial neural networks. A trial solution of the differential equation is written as a sum of two parts. The first part satisfies the boundary (or initial) conditions and contains no adjustable parameters. The second part is constructed so as not to affect the boundary conditions. This part involves a feedforward neural network containing adjustable parameters (the weights). Hence by construction the boundary conditions are satisfied and the network is trained to satisfy the differential equation. The applicability of this approach ranges from single ODEs to systems of coupled ODEs and also to PDEs. In this article we illustrate the method by solving a variety of model problems and present comparisons with finite elements for several cases of partial differential equations.
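
    A minimal runnable sketch of this construction for a single ODE (illustrative choices throughout: $y' = -y$, $y(0) = 1$, a small tanh network, and BFGS standing in for the paper's optimization setup):

        import numpy as np
        from scipy.optimize import minimize

        # solve y' = -y, y(0) = 1 on [0, 1]; exact solution is exp(-x)
        x = np.linspace(0.0, 1.0, 20)
        H = 8                                     # hidden units

        def net(p, x):                            # N(x) = sum_j v_j tanh(w_j x + b_j)
            w, b, v = p[:H], p[H:2*H], p[2*H:]
            return np.tanh(np.outer(x, w) + b) @ v

        def dnet(p, x):                           # dN/dx
            w, b, v = p[:H], p[H:2*H], p[2*H:]
            return (1 - np.tanh(np.outer(x, w) + b) ** 2) @ (v * w)

        def loss(p):
            y = 1.0 + x * net(p, x)               # trial form: y(0) = 1 holds exactly
            dy = net(p, x) + x * dnet(p, x)
            return np.mean((dy + y) ** 2)         # residual of y' = -y on the grid

        p0 = np.random.default_rng(0).normal(scale=0.5, size=3 * H)
        p = minimize(loss, p0, method="BFGS").x
        print(np.max(np.abs(1.0 + x * net(p, x) - np.exp(-x))))   # max error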

  10. Exploratory Application Of Neural Networks To School Finance: Forecasting Educational Spending (1997)

    Authors: Bruce D. Baker, Craig E. Richards
    Annual Meeting of the American Educational Research Association, San Diego, CA, April 13, 1998

    This study provides a side-by-side comparison of the linear regression methodologies used by the National Center for Education Statistics in preparing projections of educational spending with relatively new, flexible, non-linear regression methods. These methods have come to be known as Neural Networks because they are designed to mimic the pattern-learning processes of a simple brain. Neural Networks have been promoted for their predictive accuracy in both cross-sectional (Buchman et al.; Odom, 1994; Worzala, Lenk and Silva, 1995) and time series analyses (Caudill, 1995b; Hansen and Nelson, 1997; McMenamin, 1997). Others have recently highlighted the inferential value of Neural Networks in revealing nonlinearities in complex data (Liao, 1992). This study finds that Neural Networks provide prediction accuracy comparable to the NCES model. More importantly, however, the Neural Networks reveal theoretically sound, non-linear patterns overlooked by the simple linear approach.

  11. Noisy Time Series Prediction Using a Recurrent Neural Network and Grammatical Inference

    Authors: C. Lee Giles, Steve Lawrence, Ah Chung Tsoi
    Machine Learning, Volume 44, Number 1/2, July/August, pp. 161-183, 2001.

    Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high-noise, small-sample-size signals. We introduce a new intelligent signal processing method which addresses the difficulties. The proposed method uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for the next day with an error rate of 47.1%. The error rate reduces to around 40% when rejecting examples where the system has low confidence in its prediction. We show that the symbolic representation aids the extraction of symbolic knowledge from the trained recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Automata rules related to well-known behavior such as trend following and mean reversal are extracted.
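
    The symbolization step can be sketched as follows (k-means stands in for the self-organizing map, and synthetic prices for the exchange-rate data); the resulting symbol sequence is what the recurrent network is trained on:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        prices = 100.0 * np.exp(np.cumsum(0.01 * rng.normal(size=500)))
        returns = np.diff(np.log(prices)).reshape(-1, 1)

        # quantize returns into a small symbolic alphabet (stand-in for the SOM)
        quantizer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(returns)
        symbols = quantizer.labels_               # e.g. down / flat / up
        print(symbols[:20])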

  12. Rule Inference for Financial Prediction using Recurrent Neural Networks

    Authors: C. Lee Giles, Steve Lawrence, Ah Chung Tsoi
    Proceedings of IEEE/IAFE Conference on Computational Intelligence for Financial Engineering (CIFEr), IEEE, Piscataway, NJ, 1997, pp. 253-259. Copyright IEEE.

    This paper considers the prediction of noisy time series data, specifically, the prediction of foreign exchange rate data. A novel hybrid neural network algorithm for noisy time series prediction is presented which exhibits excellent performance on the problem. The method is motivated by consideration of how neural networks work, and by fundamental difficulties with random correlations when dealing with small sample sizes and high-noise data. The method permits the inference and extraction of rules. One of the greatest complaints against neural networks is that it is hard to figure out exactly what they are doing; this work provides one answer for the internal workings of the network. Furthermore, these rules can be used to gain insight into both the real-world system and the predictor. This paper focuses on noisy time series prediction and rule inference; use of the system in trading would typically involve the utilization of other financial indicators and domain knowledge.

  13. Overfitting in Neural Networks: Backpropagation, Conjugate Gradient, and Early Stopping

    Authors: Rich Caruana, Steve Lawrence, C. Lee Giles (NIPS 2000)
    Neural Information Processing Systems, Denver, Colorado, November 28-30, 2000.

    The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity generalize well when trained with backprop and early stopping. Experiments suggest two reasons for this: 1) Overfitting can vary significantly in different regions of the model. Excess capacity allows better fit to regions of high non-linearity, and backprop often avoids overfitting the regions of low non-linearity. 2) Regardless of size, nets learn task subcomponents in similar sequence. Big nets pass through stages similar to those learned by smaller nets. Early stopping can stop training the large net when it generalizes comparably to a smaller net. We also show that conjugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linearity.
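
    The early-stopping procedure at the center of this result can be written generically (a minimal sketch; the callable names and the patience rule are mine, not from the paper):

        import copy

        def train_with_early_stopping(train_one_epoch, val_loss,
                                      max_epochs=1000, patience=20):
            """Keep the weights with the best validation loss and stop
            after `patience` epochs without improvement."""
            best, best_params, bad = float("inf"), None, 0
            for _ in range(max_epochs):
                params = train_one_epoch()    # one pass of backprop; returns weights
                v = val_loss(params)
                if v < best:
                    best, best_params, bad = v, copy.deepcopy(params), 0
                else:
                    bad += 1
                    if bad >= patience:
                        break
            return best_params, best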

  14. Neural Network Classification and Prior Class Probabilities

    Authors: Steve Lawrence, Ian Burns, Andrew Back, Ah Chung Tsoi, C. Lee Giles (LNCS State-of-the-Art Surveys)

    This work examines reasons for, and solutions to, the differences between theory and practice that arise when prior class probabilities differ.
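
    The abstract gives no detail, but a standard correction in this setting (hedged: it may differ from the chapter's own recipe) rescales the network outputs by the ratio of deployment priors to training priors and renormalizes:

        import numpy as np

        def adjust_for_priors(posteriors, train_priors, true_priors):
            """Rescale outputs trained under one set of class priors to
            approximate posteriors under different deployment priors."""
            p = np.asarray(posteriors) * (np.asarray(true_priors) / np.asarray(train_priors))
            return p / p.sum(axis=-1, keepdims=True)

        # trained on a balanced set, deployed where class 0 is 9x more common
        print(adjust_for_priors([0.3, 0.7], [0.5, 0.5], [0.9, 0.1]))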

  15. Presenting and Analyzing the Results of AI Experiments: Data Averaging and Data Snooping

    Authors: C. Lee Giles, Steve Lawrence
    Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI-97, AAAI Press, Menlo Park, California, pp. 362-367, 1997. Copyright AAAI.

    Experimental results reported in the machine learning AI literature can be misleading. This paper investigates the common processes of data averaging (reporting results in terms of the mean and standard deviation of the results from multiple trials) and data snooping in the context of neural networks, one of the most popular AI machine learning models. Both of these processes can result in misleading results and inaccurate conclusions. We demonstrate how easily this can happen and propose techniques for avoiding these very important problems. For data averaging, common presentation assumes that the distribution of individual results is Gaussian. However, we investigate the distribution for common problems and find that it often does not approximate the Gaussian distribution, may not be symmetric, and may be multimodal. We show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task, we find that the distribution of performance is skewed towards better performance for smoother target functions and skewed towards worse performance for more complex target functions. We propose new guidelines for reporting performance which provide more information about the actual distribution (e.g. box-whiskers plots). For data snooping, we demonstrate that optimization of performance via experimentation with multiple parameters can lead to significance being assigned to results which are due to chance. We suggest that precise descriptions of experimental techniques can be very important to the evaluation of results, and that we need to be aware of potential data snooping biases when formulating these experimental techniques (e.g. selecting the test procedure). Additionally, it is important to only rely on appropriate statistical tests and to ensure that any assumptions made in the tests are valid (e.g. normality of the distribution).
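
    A small illustration of the reporting point (trial results invented for this sketch): two models with similar mean errors but very different distributions, which a box-whiskers plot exposes and a mean-and-standard-deviation summary hides.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        model_a = rng.normal(0.10, 0.01, 50)          # roughly Gaussian errors
        model_b = 0.08 + rng.exponential(0.03, 50)    # skewed errors, similar mean

        print("means:", model_a.mean(), model_b.mean())
        plt.boxplot([model_a, model_b])
        plt.xticks([1, 2], ["model A", "model B"])
        plt.ylabel("test error")
        plt.show()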

  16. A computational theory of the firm

    Authors: Jason Barr, Francesco Saraceno
    Journal of Economic Behavior & Organization, Vol. 49 (2002), pp. 345-361.

    This paper proposes using computational learning theory (CLT) as a framework for analyzing the information processing behavior of firms; we argue that firms can be viewed as learning algorithms. The costs and benefits of processing information are linked to the structure of the firm and its relationship with the environment. We model the firm as a type of artificial neural network (ANN). Through a simulation experiment, we show which types of networks maximize the net return to computation given different environments.

  17. Neural Networks for Density Estimation in Financial Markets

    Authors: Malik Magdon-Ismail and Amir Atiya
    Intelligent Data Engineering and Learning, First International Symposium, pp. 171-178, eds. L. Xu, L. W. Chan, I. King and A. Fu, Springer, October 1998.

    We introduce two new techniques for density estimation. Our approach poses the problem as a supervised learning task which can be performed using Neural Networks. We introduce a stochastic method for learning the cumulative distribution and an analogous deterministic technique. We use these techniques to estimate the densities of log stock price changes, demonstrating that the density is fat-tailed, contrary to the Black-Scholes model, which assumes it to be Gaussian.
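
    A sketch of the deterministic variant as I read it (empirical-CDF targets and scikit-learn stand in for the authors' setup): fit a network to the sample's empirical cumulative distribution, then differentiate to recover the density.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        x = np.sort(rng.normal(size=2000))
        F = (np.arange(1, x.size + 1) - 0.5) / x.size     # empirical CDF targets

        net = MLPRegressor(hidden_layer_sizes=(30,), max_iter=5000)
        net.fit(x.reshape(-1, 1), F)

        grid = np.linspace(-3.0, 3.0, 601)
        cdf = net.predict(grid.reshape(-1, 1))
        pdf = np.gradient(cdf, grid)                      # density = dF/dx
        print(pdf[::100])                                 # ~ standard normal p.d.f.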

  18. Cournot Competition, Organization and Learning

    Authors: Jason Barr and Francesco Saraceno
    Keywords: Firm Learning, Neural Networks, Cournot Competition
    JEL Classification: C63, D83, L13, L25

    We model firms' output decisions in a repeated duopoly framework, focusing on three interrelated issues: (1) the role of learning in the adjustment process toward equilibrium, (2) the role of organizational structure in organizational decision making, and (3) the role of changing environmental conditions in learning and output decisions. We model the firm as a type of artificial neural network, which must estimate its optimal output decision based on signals it receives from the economic environment (which influence the demand function). Via simulation analysis we show: (1) how organizations learn to estimate the optimal output over time as a function of the environmental dynamics; (2) which networks are optimal given the environmental dynamics; and (3) the equilibrium industry structure.

  19. Applying Artificial Neural Networks to Business, Economics and Finance

    Author: Dr. Yochanan Shachmurove
    Departments of Economics, The City College of the City University of New York and The University of Pennsylvania

    This paper surveys the significance of recent work on emulative neural networks (ENNs) by researchers across many disciplines in the light of issues of indeterminacy. Financial and economic forecasters have witnessed the recent development of a number of new forecasting models. Traditionally, popular forecasting techniques include regression analysis, time-series analysis, moving averages and smoothing methods, and numerous judgmental methods. However, all of these have the same drawback insofar as they require assumptions about the form of population distribution. Regression models, for example, assume that the underlying population is normally distributed. ENNs are members of a family of statistical techniques, as are flexible nonlinear regression models, discriminant models, data reduction models, and nonlinear dynamic systems. They are trainable analytic tools that attempt to mimic information processing patterns in the brain. Because they do not necessarily require assumptions about population distribution, economists, mathematicians and statisticians are increasingly using ENNs for data analysis.

  20. Anticorrelations and subdiffusion in financial systems

    Authors: Kestutis Staliunas
    Subj-class: Disordered Systems and Neural Networks; Statistical Mechanics; Computational Engineering, Finance, and Science

    The statistical dynamics of financial systems is investigated, based on a model of a randomly coupled equation system driven by a stochastic Langevin force. Anticorrelations of price returns and subdiffusion of prices are found from the model and compared with those calculated from historical $/EURO exchange rates.
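
    As a toy illustration of the mechanism (a one-dimensional stand-in I constructed, not the paper's randomly coupled system): a linear restoring force in a Langevin equation makes successive increments anticorrelated, the same signature that produces subdiffusion of the price.

        import numpy as np

        rng = np.random.default_rng(0)
        steps, dt, k = 20_000, 0.1, 5.0
        x = np.zeros(steps)
        for t in range(1, steps):
            # restoring drift -k*x plus a Langevin force
            x[t] = x[t - 1] - k * x[t - 1] * dt + np.sqrt(dt) * rng.normal()

        r = np.diff(x)
        c1 = np.mean(r[:-1] * r[1:]) / np.var(r)
        print("lag-1 return autocorrelation:", c1)   # negative => anticorrelation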

  21. Neural Network Applications in Stock Market Predictions - A Methodology Analysis

    Author: Marijana Zekic, MS
    University of Josip Juraj Strossmayer in Osijek, Faculty of Economics Osijek, Gajev trg 7, 31000 Osijek, Croatia
    Keywords: neural networks applications, stock market, qualitative comparative analysis, NN methodology, benefits, limitations

    Neural networks (NNs), as artificial intelligence (AI) methods, have become very important in making stock market predictions. Much research on the applications of NNs for solving business problems has proven their advantages over statistical and other methods that do not include AI, although there is no optimal methodology for a given problem. In order to identify the main benefits and limitations of previous methods in NN applications, and to find connections between methodology and the problem domains, data models, and results obtained, a comparative analysis of selected applications is conducted. It can be concluded from the analysis that NNs are most often implemented in forecasting stock prices, returns, and stock modeling, and that the most frequent methodology is the Backpropagation algorithm. However, the importance of integrating NNs with other artificial intelligence methods is emphasized by numerous authors. In spite of many benefits, there are limitations that should be investigated, such as the relevance of the results and the "best" topology for a given problem.

  22. Structure Optimization of Neural Networks in Relation to Underlying Data

    Author: Marijana Zekic, MS
    University of Josip Juraj Strossmayer in Osijek, Faculty of Economics Osijek, Gajev trg 7, 31000 Osijek, Croatia
    Keywords: structure optimization of neural networks, cascading, pruning, variable selection, principal component analysis

    Optimization of neural network topology has been one of the most important problems since neural networks came to the fore as a method for prediction, classification, and association. A number of heuristic formulas for determining the number of hidden units have been developed (Masters, T., 1993; Marcek, D., 1997), and some algorithms for structure optimization have been suggested, such as cascading, pruning, the A* algorithm, and others. However, the connection between optimization techniques and the underlying data in the model has not been investigated enough. This paper deals with the influence of variable selection and of the statistics of input and output variables on several algorithms for structure optimization. Principal component analysis and analysis of variance, among other statistical tests, are conducted on stock return prediction models. The predictive power of the neural networks is assessed, along with the sensitivity of the dependent variables to changes in the inputs.
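
    One of the data-driven steps discussed, input reduction by principal component analysis, can be sketched as follows (synthetic inputs with one redundant variable; not the paper's stock data):

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        X[:, 3] = 0.9 * X[:, 0] + rng.normal(scale=0.1, size=200)  # redundant input

        pca = PCA().fit(X)
        var = np.cumsum(pca.explained_variance_ratio_)
        k = int(np.searchsorted(var, 0.95)) + 1      # components for 95% variance
        print("inputs compress to", k, "components")
        X_reduced = PCA(n_components=k).fit_transform(X)   # reduced network inputs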
