  1. Forecast and event control: On what is and what cannot be possible

    Authors: Karl Svozil
    Comments: Paper presented at the Workshop on "Determinism" at Ringberg Castle, Germany, July 6th, 2001
    Subj-class: Data Analysis, Statistics and Probability; General Physics

    Consequences of the most basic and evident consistency requirement, namely that measured events cannot both happen and not happen at the same time, are briefly reviewed. Particular emphasis is given to event forecast and event control. As a consequence, very general bounds on the forecast and control of events within the known laws of physics are derived. These bounds are of a global, statistical nature and need not affect singular events or groups of events.

  2. Forecasting the SST space-time variability of the Alboran Sea with genetic algorithms

    Authors: Alberto Alvarez, Cristobal Lopez, Margalida Riera, Emilio Hernandez-Garcia, Joaquin Tintore
    Comments: 15 pages, 3 figures; latex compiled with agums.sty
    Subj-class: Chaotic Dynamics; Atmospheric and Oceanic Physics
    Journal-ref: Geophysical Research Letters 27, 739-742 (2000).

    We propose a nonlinear ocean forecasting technique based on a combination of genetic algorithms and empirical orthogonal function (EOF) analysis. The method is used to forecast the space-time variability of the sea surface temperature (SST) in the Alboran Sea. The genetic algorithm finds the equations that best describe the behaviour of the different temporal amplitude functions in the EOF decomposition and, therefore, enables global forecasting of the future time-variability.
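    The EOF step of the method amounts to a principal-component decomposition of the space-time field, which can be sketched with an SVD. This is a generic illustration: the genetic-algorithm search for the amplitude equations is not shown, and the synthetic field below stands in for SST data.

```python
import numpy as np

def eof_decompose(field, n_modes):
    """Split a (time x space) anomaly field into spatial EOF patterns and
    temporal amplitude functions via an SVD (equivalent to PCA)."""
    anomalies = field - field.mean(axis=0)      # remove the time-mean field
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    amplitudes = u[:, :n_modes] * s[:n_modes]   # temporal amplitude functions
    patterns = vt[:n_modes]                     # spatial EOF patterns
    return amplitudes, patterns

# Synthetic stand-in for an SST anomaly field: one oscillating mode plus noise
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 120)
field = np.outer(np.sin(t), rng.normal(size=50)) + 0.1 * rng.normal(size=(120, 50))

amps, pats = eof_decompose(field, n_modes=2)
recon = amps @ pats + field.mean(axis=0)
explained = 1 - np.var(field - recon) / np.var(field - field.mean(axis=0))
print(round(explained, 3))  # the leading modes capture most of the variance
```

    Forecasting then reduces to predicting the few amplitude time series, which is where the genetic algorithm searches for the governing equations.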

  3. Time Series Forecasting: A Nonlinear Dynamics Approach

    Authors: Stefano Sello
    Comments: Postscript v1.2, 22 pages with 12 color figures
    Report-no: Termo Fluid Dynamics Research Center Enel Research USG/180699
    Subj-class: Data Analysis, Statistics and Probability; Chaotic Dynamics

    The problem of prediction of a given time series is examined on the basis of recent nonlinear dynamics theories. Particular attention is devoted to forecasting the amplitude and phase of one of the most common solar activity indicators, the international monthly smoothed sunspot number. It is well known that the solar cycle is very difficult to predict, due to the intrinsic complexity of the related time behaviour and to the lack of a successful quantitative theoretical model of the Sun's magnetic cycle. Starting from a recent previous work, we checked the reliability and accuracy of a forecasting model based on concepts of nonlinear dynamical systems applied to experimental time series, such as phase-space embedding, the Lyapunov spectrum and chaotic behaviour. The model is based on a local hypothesis of the behaviour in the embedding space, utilizing an optimal number k of neighbour vectors to predict the future evolution of the current point, with the set of characteristic parameters determined by several previous parametric computations. The performance of this method suggests that it is a valuable addition to the set of so-called statistical-numerical prediction techniques, such as Fourier analysis, curve fitting, neural networks and climatological methods. The main task is to set up and compare a promising numerical nonlinear prediction technique, essentially based on an inverse problem, with the most accurate predictive methods, such as the so-called "precursor methods", which now appear reasonably accurate in predicting "long term" solar activity, with particular reference to the "solar" precursor methods based on solar dynamo theory.
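    The local prediction step can be sketched as a delay embedding followed by an average over the successors of the k nearest neighbour vectors. The parameters below are illustrative choices, not the paper's optimal values, and the clean sine series stands in for sunspot data.

```python
import numpy as np

def knn_forecast(series, dim=4, lag=2, k=5):
    """One-step local forecast: embed the series in a delay phase space,
    find the k nearest past neighbours of the current state vector, and
    average their observed successors."""
    span = (dim - 1) * lag
    states = np.array([series[i:i + span + 1:lag] for i in range(len(series) - span)])
    current, history = states[-1], states[:-1]
    successors = series[span + 1:]              # value following each past state
    dists = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(dists)[:k]
    return successors[nearest].mean()

# A clean oscillatory series: the forecast should continue the oscillation
t = np.arange(300)
series = np.sin(0.2 * t)
pred = knn_forecast(series)
print(round(pred, 2))
```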

  4. Time Series Forecasting: A Multivariate Stochastic Approach

    Authors: Stefano Sello
    Comments: Postscript v1.1, 6 pages with 3 figures
    Report-no: Termo Fluid Dynamics Research Center Enel Research NSG/260199
    Subj-class: Data Analysis, Statistics and Probability

    This note deals with a multivariate stochastic approach to forecasting the behaviour of a cyclic time series. Particular attention is devoted to the problem of predicting the time behaviour of sunspot numbers for the current 23rd cycle. The idea is to consider the previous known n cycles as n particular realizations of a given stochastic process. The aim is to predict the future behaviour of the current (n+1)th realization, given a portion of the curve and the structure of the previous n realizations. The model derived is based on the cross-correlations between the current (n+1)th realization and the previous n ones and on the solution of the related least squares problem. As an example, we applied the method to smoothed monthly sunspot numbers from the SIDC archives, in order to predict the behaviour of the current 23rd solar cycle.
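    A minimal sketch of the idea: fit the observed portion of the current realization as a least-squares combination of the previous ones, then extend the fit. The bell-shaped synthetic cycles below stand in for the SIDC sunspot archives.

```python
import numpy as np

def forecast_cycle(past_cycles, partial, horizon):
    """Fit the observed part of the current cycle as a least-squares
    combination of the n previous realizations, then use the same
    coefficients to extend the curve."""
    m = len(partial)
    X = past_cycles[:, :m].T                  # observed points x previous cycles
    coeffs, *_ = np.linalg.lstsq(X, partial, rcond=None)
    return past_cycles[:, m:m + horizon].T @ coeffs

# Three synthetic past "cycles": scaled copies of a bell-shaped profile
profile = np.exp(-0.5 * ((np.arange(100) - 50) / 15.0) ** 2)
past = np.array([a * profile for a in (0.8, 1.0, 1.2)])
current = 1.1 * profile                       # current realization, partly observed
pred = forecast_cycle(past, current[:40], horizon=20)
print(np.allclose(pred, current[40:60]))
```

    On this noise-free toy data the current cycle lies exactly in the span of the past ones, so the extension is exact; real sunspot cycles would leave a residual.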

  5. Currency Exchange Rate Forecasting from News Headlines

    Authors: Desh Peramunetilleke and Raymond K. Wong
    School of Computer Science & Engineering, University of New South Wales, Sydney, NSW 2052, Australia
    Keywords: Data mining, foreign exchange, prediction

    We investigate how money market news headlines can be used to forecast intraday currency exchange rate movements. The innovation of the approach is that, unlike analysis based on quantifiable information, the forecasts are produced from text describing the current status of world financial markets, as well as political and general economic news. In contrast to numeric time series data, textual data contains not only the effect (e.g., the dollar rises against the Deutschmark) but also the possible causes of the event (e.g., because of a weak German bond market). Hence improved predictions are expected from this richer input. The output is a categorical forecast about currency exchange rates: the dollar moves up, remains steady or goes down within the next one, two or three hours respectively. On a publicly available commercial data set the system produces results that are significantly better than random prediction. The contribution of this research is the smart modeling of the prediction problem, enabling the use of content-rich text for forecasting purposes.
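    A toy sketch of the idea: produce a categorical forecast from a headline by counting bullish against bearish keywords. The lexicons below are invented for illustration and are far simpler than the weighted keyword records the paper describes.

```python
# Hypothetical keyword lexicons -- illustrative only, not the paper's rules
UP_WORDS = {"rises", "rally", "gains", "strong", "surge"}
DOWN_WORDS = {"falls", "weak", "drops", "plunge", "slump"}

def classify_headline(headline):
    """Categorical forecast from one headline: 'up', 'down' or 'steady',
    by counting bullish versus bearish keywords."""
    words = headline.lower().replace(",", " ").split()
    score = sum(w in UP_WORDS for w in words) - sum(w in DOWN_WORDS for w in words)
    return "up" if score > 0 else "down" if score < 0 else "steady"

print(classify_headline("Dollar rises against the mark on strong data"))  # up
print(classify_headline("Bonds weak as dollar drops"))                    # down
```

    A realistic system would weight keywords by their historical association with subsequent rate moves rather than counting them equally.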

  6. Modeling and Forecasting Realized Volatility

    Authors: Torben G. Andersen, Tim Bollerslev, Francis X. Diebold and Paul Labys
    KEYWORDS: Continuous-time methods, quadratic variation, realized volatility, realized correlation, high-frequency data, exchange rates, vector autoregression, long memory, volatility forecasting, correlation forecasting, density forecasting, risk management, value at risk.

    We provide a general framework for integration of high-frequency intraday data into the measurement, modeling, and forecasting of daily and lower frequency return volatilities and return distributions. Most procedures for modeling and forecasting financial asset return volatilities, correlations, and distributions rely on potentially restrictive and complicated parametric multivariate ARCH or stochastic volatility models. Use of realized volatility constructed from high-frequency intraday returns, in contrast, permits the use of traditional time-series methods for modeling and forecasting. Building on the theory of continuous-time arbitrage-free price processes and the theory of quadratic variation, we develop formal links between realized volatility and the conditional covariance matrix. Next, using continuously recorded observations for the Deutschemark / Dollar and Yen / Dollar spot exchange rates covering more than a decade, we find that forecasts from a simple long-memory Gaussian vector autoregression for the logarithmic daily realized volatilities perform admirably compared to a variety of popular daily ARCH and more complicated high-frequency models. Moreover, the vector autoregressive volatility forecast, coupled with a parametric lognormal-normal mixture distribution implied by the theoretically and empirically grounded assumption of normally distributed standardized returns, produces well-calibrated density forecasts of future returns, and correspondingly accurate quantile predictions. Our results hold promise for practical modeling and forecasting of the large covariance matrices relevant in asset pricing, asset allocation and financial risk management applications.
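    The realized-volatility construction itself is simple: sum the squared intraday log returns over the day. The sketch below uses simulated five-minute prices; the step volatility and interval count are illustrative.

```python
import numpy as np

def realized_volatility(intraday_prices):
    """Daily realized variance is the sum of squared intraday log returns;
    realized volatility is its square root."""
    log_returns = np.diff(np.log(intraday_prices))
    return np.sqrt(np.sum(log_returns ** 2))

# Prices simulated from a log random walk with known step volatility
rng = np.random.default_rng(2)
n_steps, sigma = 288, 0.001                   # e.g. five-minute intervals
prices = 100 * np.exp(np.cumsum(rng.normal(0, sigma, n_steps)))
rv = realized_volatility(prices)
print(round(rv, 3))  # close to sigma * sqrt(n_steps) ~ 0.017
```

    Daily series of such estimates can then be modeled with ordinary time-series tools, which is what makes the paper's long-memory vector autoregression workable.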

  7. Modeling Chaotic Behavior of Stock Indices Using Intelligent Paradigms

    Authors: Ajith Abraham, Ninan Sajith Philip and P. Saratchandran
    Department of Computer Science, Oklahoma State University, Tulsa, Oklahoma 74106, USA, Email: ajith.abraham@ieee.org
    Key words: connectionist paradigm, support vector machine, neural network, difference boosting, neuro-fuzzy, stock market.

    The use of intelligent systems for stock market prediction is well established. In this paper, we investigate how the seemingly chaotic behavior of stock markets can be well represented using several connectionist paradigms and soft computing techniques. To demonstrate the different techniques, we considered the Nasdaq-100 index of the Nasdaq Stock Market and the S&P CNX NIFTY stock index. We analyzed 7 years of Nasdaq-100 main index values and 4 years of NIFTY index values. This paper investigates the development of a reliable and efficient technique to model the seemingly chaotic behavior of stock markets. We considered an artificial neural network trained using the Levenberg-Marquardt algorithm, a Support Vector Machine (SVM), a Takagi-Sugeno neuro-fuzzy model and a Difference Boosting Neural Network (DBNN). This paper briefly explains how the different connectionist paradigms can be formulated using different learning methods, and then investigates whether they can provide performance sufficiently good and robust to yield a reliable forecast model for stock market indices. Experiment results reveal that all the connectionist paradigms considered could represent the behavior of the stock indices very accurately.

  8. Liquidity Supply and Demand: Empirical Evidence from the Vancouver Stock Exchange

    Authors: Burton Hollifield, Patrik Sandas and Robert A. Miller

    We analyze the costs and benefits of providing and using liquidity in a limit order market. Using a large and comprehensive data set which details the complete histories of orders and trades on the Vancouver Stock Exchange, we are able to model the order flow and measure market liquidity as it changes over time. We accomplish this by constructing a measure of the expected net payoffs to demanding or supplying liquidity, and using our data on order arrivals and placement decisions to make inferences about the traders' demand for liquidity and the cost of entering orders in the market. Our results show that liquidity demand is indeed time varying, and is related to several key observable measures of market characteristics. Furthermore, we find evidence of unexploited profit opportunities in the market, perhaps implying that traders do not continuously monitor the market for profitable trades.

  9. Technical Analysis and Liquidity Provision

    Authors: Kenneth A Kavajecz and Elizabeth R. Odders-White

    The apparent conflict between the level of resources dedicated to technical analysis by practitioners and academic theories of market efficiency is a long-standing puzzle. We offer an alternative explanation for the value of technical analysis that is consistent with market efficiency: specifically, that it reveals information about liquidity provision. We find evidence consistent with the hypotheses that support and resistance levels coincide with peaks in depth on the limit order book and that moving average forecasts reveal information about the relative position of depth on the book. These results demonstrate that technical analysis can have value even in an efficient market, and they provide a practical method for estimating the level of liquidity on the book.

  10. Credibility of Management Forecasts

    Authors: Jonathan L. Rogers and Phillip C. Stocken
    The Wharton School, University of Pennsylvania, Philadelphia, PA 19104-6365

    This paper examines the credibility of management earnings forecasts. With regard to how forecast bias varies with manager incentives, we establish that when it is more difficult for market participants to detect forecast bias, financially distressed firms are more optimistic than healthy firms and firms in concentrated industries are more pessimistic than those in less concentrated industries. With regard to the stock price response to forecasts, we find that the market's immediate response varies with the predicted bias in good news but not in bad news forecasts. The market's subsequent response, however, is consistent with it eventually identifying the bias in bad news forecasts and modifying its valuation of the firm in the appropriate direction.

  11. Technical Analysis and Mutual Funds: Testing Trading Rules

    Authors: Sotiris Zontos, Christos Skiadas and Yiannis Valvis
    Technical University of Crete, University Campus, Kounoupidiana, 73100 Chania, Greece. Phone: +30-821-40115, Fax: +30-821-37362
    KEYWORDS: Technical Analysis, Mutual Funds, Moving Average Rule, Decision Making

    This paper attempts to develop strategies that enable portfolio managers to improve market timing by learning to recognize leading indications of forthcoming changes. The aim of this study is to test, on a mutual fund series, the predictive ability of a popular technical analysis tool, the Moving Average Rule. A short-term and a long-term moving average are used to generate buy and sell signals for mutual funds. 2891 predictions were made for the same time series, for different values of the short-term and long-term moving averages, and the profitability of the method was calculated. The method proved profitable if no buying and selling costs were counted. The two-moving-average rule by itself is therefore efficient only for the companies that administer the respective mutual funds, not for the single investor. The triple moving average rule is presented, which may offer a solution to this problem.
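    The two-moving-average rule can be sketched directly. The window lengths below are arbitrary illustrations, not the values tested in the paper, and transaction costs are ignored as in its no-cost case.

```python
import numpy as np

def ma_signals(prices, short=5, long=20):
    """Two-moving-average rule: emit a buy signal (+1) when the short MA
    crosses above the long MA and a sell signal (-1) when it crosses below."""
    def sma(x, w):
        return np.convolve(x, np.ones(w) / w, mode="valid")
    s = sma(prices, short)[long - short:]     # align the two averages
    l = sma(prices, long)
    above = s > l
    return np.where(above[1:] & ~above[:-1], 1,
           np.where(~above[1:] & above[:-1], -1, 0))

# Fall, rise, fall: expect one buy at the upturn and one sell at the downturn
prices = np.concatenate([np.linspace(120, 100, 40),
                         np.linspace(100, 120, 40),
                         np.linspace(120, 100, 40)])
sig = ma_signals(prices)
print(int(np.sum(sig == 1)), int(np.sum(sig == -1)))  # 1 1
```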

  12. Daily Stock Market Forecast from Textual Web Data

    Authors: B. Wuthrich, V. Cho, S. Leung, D. Permunetilleke, K. Sankaran, J. Zhang, W. Lam
    The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong

    Data mining can be described as making better use of data. Every human being is increasingly faced with unmanageable amounts of data; hence, data mining or knowledge discovery affects all of us. It is therefore recognized as one of the key research areas. Ideally, we would like to develop techniques for making better use of any kind of data for any purpose. However, we argue that this goal is still too demanding. It may be more promising to develop techniques applicable to specific data and with a specific goal in mind. In this paper, we describe such an application-driven data mining system. Our aim is to predict stock markets using information contained in articles published on the Web. Mostly textual articles appearing in leading and influential financial newspapers are taken as input. From those articles the daily closing values of major stock market indices in Asia, Europe and America are predicted. Textual statements contain not only the effect (e.g. the stocks plummet) but also why it happened (e.g. because of weakness in the dollar and consequently a weakening of the treasury bonds). Exploiting textual information in addition to numeric time series data increases the quality of the input, hence improved predictions are expected. The forecasts are available in real time via www.cs.ust.hk/~beat/Predict daily at 7:45 am Hong Kong time, before Tokyo, Hong Kong and Singapore, the major Asian markets, start trading. The system's accuracy on this tremendously difficult application is highly promising.

  13. High- and Low-Frequency Exchange Rate Volatility Dynamics: Range-Based Estimation of Stochastic Volatility Models

    Authors: Sassan Alizadeh, Michael W. Brandt and Francis X. Diebold

    We propose using the price range in the estimation of stochastic volatility models. We show theoretically, numerically, and empirically that the range is not only a highly efficient volatility proxy, but also that it is approximately Gaussian and robust to microstructure noise. The good properties of the range imply that range-based Gaussian quasi-maximum likelihood estimation produces simple and highly efficient estimates of stochastic volatility models and extractions of latent volatility series. We use our method to examine the dynamics of daily exchange rate volatility and discover that traditional one-factor models are inadequate for describing simultaneously the high- and low-frequency dynamics of volatility. Instead, the evidence points strongly toward two-factor models with one highly persistent factor and one quickly mean-reverting factor.
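    A standard range-based proxy in this spirit is the Parkinson estimator, which rescales the mean squared log range. The sketch below simulates daily high-low ranges from a random walk with known volatility; it is a textbook proxy, not the authors' full quasi-maximum-likelihood procedure.

```python
import numpy as np

def parkinson_volatility(high, low):
    """Range-based volatility estimate: for a driftless Brownian motion,
    E[(log(H/L))^2] = 4 log(2) * sigma^2, so invert that relation."""
    log_range = np.log(np.asarray(high) / np.asarray(low))
    return np.sqrt(np.mean(log_range ** 2) / (4 * np.log(2)))

# Simulate many "days" of an intraday log random walk with known volatility
rng = np.random.default_rng(3)
sigma, days, steps = 0.01, 500, 390
paths = np.cumsum(rng.normal(0, sigma / np.sqrt(steps), (days, steps)), axis=1)
high, low = np.exp(paths.max(axis=1)), np.exp(paths.min(axis=1))
est = parkinson_volatility(high, low)
print(round(est, 3))  # close to sigma = 0.01
```

    The range uses much more intraday information than the close-to-close return, which is the efficiency property the paper exploits.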

  14. Consumer Sentiment: Its Rationality and Usefulness in Forecasting Expenditure - Evidence from the Michigan Micro Data

    Author: Nicholas S. Souleles
    The Rodney L. White Center for Financial Research
    The Wharton School, University of Pennsylvania, 3254 Steinberg Hall-Dietrich Hall, 3620 Locust Walk, Philadelphia, PA 19104-6367
    Keywords: Consumer sentiment (confidence); Permanent income hypothesis; Excess sensitivity; Rational expectations; Forecast errors; Shocks; Unobserved heterogeneity.

    This paper provides one of the first comprehensive analyses of the household data underlying the Michigan Index of Consumer Sentiment. These data are used to test the rationality of consumer expectations and to assess their usefulness in forecasting expenditure. The results can also be interpreted as characterizing the shocks that have hit different types of households over time. Expectations are found to be biased, at least ex post, in that forecast errors did not average out even over a sample period lasting almost 20 years. People underestimated the disinflation of the early 1980's and of the 1990's, and generally appear to underestimate the severity of business cycles. Forecasts are also inefficient, in that people's forecast errors are correlated with their demographic characteristics and/or aggregate shocks did not hit all people uniformly.

    Further, sentiment is found to be useful in forecasting future consumption, even controlling for lagged consumption and macro variables like stock prices. This excess sensitivity is counter to the permanent income hypothesis [PIH]. Higher confidence is correlated with less saving, consistent with precautionary motives and increases in expected future resources. Some of the rejection of the PIH is found to be due to the systematic demographic components in forecast errors. But even after controlling for these components, some excess sensitivity persists. More broadly, these results suggest that empirical implementations of forward-looking models need to better account for systematic heterogeneity in forecast errors.

  15. Takeovers, Freezouts, and Risk Arbitrage

    Author: Armando Gomes

    This paper develops a dynamic model of tender offers in which there is trading in the target's shares during the takeover, and bidders can freeze out target shareholders (compulsorily acquire remaining shares not tendered at the bid price), features that prevail in almost all takeovers. We show that trading allows for the entry of arbitrageurs with large blocks of shares who can hold out against a freezeout, a threat that forces the bidder to offer a high preemptive bid. There is also a positive relationship between the takeover premium and arbitrageurs' accumulation of shares before the takeover announcement, and the less liquid the target stock, the stronger this relationship is. Moreover, freezeouts eliminate the free-rider problem, but front-end loaded bids, such as two-tiered and partial offers, do not benefit bidders because arbitrageurs can undo any potential benefit and eliminate the coerciveness of these offers. Similarly, the takeover premium is also largely unrelated to the bidder's ability to dilute the target's shareholders after the acquisition, again due to potential arbitrage activity.

  16. Price Discovery in the U.S. Treasury Market: The Impact of Orderflow and Liquidity on the Yield Curve

    Authors: Michael W. Brandt and Kenneth A. Kavajecz

    We examine the role of price discovery in the U.S. Treasury market through the empirical relationship between orderflow, liquidity, and the yield curve. We find that orderflow imbalances (excess buying or selling pressure) can account for as much as 26 percent of the day-to-day variation in yields on days without major macroeconomic announcements. The effect of orderflow on yields is permanent and strongest when liquidity is low. All of the evidence points toward an important role of price discovery in understanding the behavior of the yield curve.

  17. The Effects of a Baby Boom on Stock Prices and Capital Accumulation in the Presence of Social Security

    Author: Andrew B. Abel
    The Wharton School of the University of Pennsylvania and National Bureau of Economic Research
    Keywords: baby boom, Social Security, stock prices, Golden Rule

    Is the stock market boom a result of the baby boom? This paper develops an overlapping generations model in which a baby boom is modeled as a high realization of a random birth rate, and the price of capital is determined endogenously by a convex cost of adjustment. A baby boom increases national saving and investment and thus causes an increase in the price of capital. The price of capital is mean-reverting, so the initial increase in the price of capital is followed by a decrease. Social Security can potentially affect national saving and investment, though in the long run it does not affect the price of capital.

  18. How Does the Internet Affect Trading? Evidence from Investor Behavior in 401(K) Plans

    Authors: James J. Choi, David Laibson and Andrew Metrick

    We analyze the impact of a Web-based trading channel on trader behavior and performance in two large corporate 401(k) plans. After 18 months of Web access, trading frequency at the sample firms doubles relative to a control group of firms without a Web channel. Web trades tend to be smaller than trades made through other channels and Web traders tend to have smaller portfolios than other traders, so the Web's impact on portfolio turnover is substantially smaller than its effect on trading frequency. There is no evidence that any of this new trading on the Web is successful; if anything, Web traders underperform in their market-timing trades. We find no evidence of a Web impact on "speculative behavior" such as positive feedback trading, herding, or short-term trading. While Web traders do differ from other traders in some of these speculative activities, these differences appear to be driven by selection effects rather than caused by the Web.

  19. Non-linear financial time series forecasting: Application to the Bel 20 stock market index

    Journal-ref: European Journal of Economic and Social Systems 14, No. 1 (2000), 81-91

    In this paper we develop a method to predict time series with non-linear tools. The specificity of the method is to use as much information as possible as input to the model (many past values of the series, many exogenous variables); to compress this information by a non-linear method in order to obtain a state vector of limited size, facilitating the subsequent regression and the generalization ability of the forecasting algorithm; and to fit a non-linear regressor (here an RBF neural network) on the reduced vectors. We show that this method is able to find non-linear relationships in artificial and real-world financial series. On a difficult task, which consists in forecasting the tendency of the Bel 20 stock market index, we show that this method improves the results compared both to linear models and to non-linear ones where the non-linear compression is not used.
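    A minimal sketch of the pipeline, with plain PCA standing in for the paper's non-linear compression and a small hand-rolled Gaussian RBF regressor; all sizes and parameters are illustrative, and the toy sine series stands in for market data.

```python
import numpy as np

def pca_compress(X, k):
    """Compress input vectors to a k-dimensional state vector (linear PCA
    here; the paper uses a non-linear compression)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

def rbf_fit_predict(X_train, y_train, X_test, n_centers=20, width=1.5, reg=1e-6):
    """Fit a Gaussian RBF network: random training points as centres,
    ridge-regularized linear readout."""
    rng = np.random.default_rng(4)
    centers = X_train[rng.choice(len(X_train), n_centers, replace=False)]
    def phi(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2 * width ** 2))
    H = phi(X_train)
    w = np.linalg.solve(H.T @ H + reg * np.eye(n_centers), H.T @ y_train)
    return phi(X_test) @ w

# Predict the next value of a toy series from a window of 12 past values,
# compressed to a 3-dimensional state vector
series = np.sin(0.15 * np.arange(400))
windows = np.array([series[i:i + 12] for i in range(len(series) - 12)])
targets = series[12:]
Z = pca_compress(windows, 3)
pred = rbf_fit_predict(Z[:300], targets[:300], Z[300:])
err = np.sqrt(np.mean((pred - targets[300:]) ** 2))
print(round(err, 3))  # small out-of-sample error on this toy task
```

    The compression step is what keeps the regressor's input dimension manageable when many past values and exogenous variables are fed in.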

  20. Can the Neuro Fuzzy Model Predict Stock Indexes Better than its Rivals?

    Authors: Chin-Shien Lin, Haider A. Khan, Chi-Chung Huang
    Key words: linear, nonlinear, KD indexes, buy and hold, neuro fuzzy

    This paper develops a model of a trading system using a neuro fuzzy framework in order to better predict stock indexes. Thirty well-known stock indexes are analyzed with the help of the model developed here. The empirical results show strong evidence of nonlinearity in the stock indexes when KD technical indexes are used. The trading point analysis and the sensitivity analysis of trading costs show the robustness of the proposed nonlinear neuro fuzzy system and the opportunity it offers for making further profits. The scenario analysis also shows that the proposed neuro fuzzy system performs consistently over time.
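    The KD technical indexes referred to are the K and D lines of the stochastic oscillator; a sketch with common default parameters follows (the paper's exact settings may differ).

```python
def kd_indexes(close, high, low, n=9, alpha=1/3):
    """KD indexes: the raw stochastic value (RSV) compares the close with
    the n-period high-low range; K and D are successive exponential
    smoothings of it."""
    k, d, ks, ds = 50.0, 50.0, [], []
    for i in range(n - 1, len(close)):
        hh = max(high[i - n + 1:i + 1])
        ll = min(low[i - n + 1:i + 1])
        rsv = 100 * (close[i] - ll) / (hh - ll) if hh > ll else 50.0
        k = (1 - alpha) * k + alpha * rsv
        d = (1 - alpha) * d + alpha * k
        ks.append(k)
        ds.append(d)
    return ks, ds

# A steadily rising series pushes both K and D toward the top of the range
close = list(range(100, 140))
ks, ds = kd_indexes(close, [c + 1 for c in close], [c - 1 for c in close])
print(round(ks[-1]), round(ds[-1]))  # 90 90
```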
