AUC Score:
Short-term Tactic¹:
Dominant Strategy:
Time series to forecast² (n):
ML Model Testing: Statistical Inference (ML)
Hypothesis Testing: Chi-Square
Surveillance: Major exchange and OTC

¹ Short-term revised.
² Time series is updated based on short-term trends.
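The header names Chi-Square as the hypothesis test. As a rough illustration of what such a test looks like (the counts, the 50/50 null hypothesis, and the outcome categories below are invented for the example, not model output), a goodness-of-fit test on directional forecast calls might run as follows:

```python
# Minimal sketch of a chi-square goodness-of-fit test on forecast outcomes.
# All numbers here are illustrative assumptions, not KappaSignal results.

def chi_square_statistic(observed, expected):
    """Sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical tally: correct vs. incorrect directional calls out of 100
# signals, tested against a 50/50 null (i.e. no predictive power).
observed = [62, 38]
expected = [50, 50]

stat = chi_square_statistic(observed, expected)
# The critical value for df = 1 at the 5% significance level is 3.841.
reject_null = stat > 3.841
print(f"chi2 = {stat:.2f}, reject H0: {reject_null}")  # → chi2 = 5.76, reject H0: True
```

Rejecting the null here would suggest the directional calls differ significantly from coin-flipping; it says nothing about the size of the edge.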
ML Model Testing
- n: Time series to forecast
- p: Price signals of S&P/TSX index
- j: Nash equilibria (Neural Network)
- k: Dominated move of S&P/TSX index holders
- a: Best response for S&P/TSX target price
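The game-theoretic vocabulary above (Nash equilibria, dominated moves, best responses) can be illustrated with a toy two-player game. The actions and payoffs below are invented for the example and are not parameters of the actual model:

```python
# Toy 2x2 "hold vs. sell" game illustrating best responses and a
# pure-strategy Nash equilibrium. Payoffs are assumptions for illustration.

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("hold", "hold"): (2, 2),
    ("hold", "sell"): (0, 3),
    ("sell", "hold"): (3, 0),
    ("sell", "sell"): (1, 1),
}
actions = ["hold", "sell"]

def best_response(opponent_action, player):
    """Action maximizing this player's payoff given the opponent's action."""
    if player == 0:  # row player
        return max(actions, key=lambda a: payoffs[(a, opponent_action)][0])
    return max(actions, key=lambda a: payoffs[(opponent_action, a)][1])

def pure_nash_equilibria():
    """Profiles where each player's action is a best response to the other's."""
    return [(r, c) for r in actions for c in actions
            if best_response(c, 0) == r and best_response(r, 1) == c]

print(pure_nash_equilibria())  # → [('sell', 'sell')]
```

In this payoff matrix "hold" is a dominated move for both players, so the unique pure-strategy Nash equilibrium is mutual selling, a prisoner's-dilemma structure.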
For further technical information on how our model works, see the article below:
How do KappaSignal algorithms actually work?
S&P/TSX Index Forecast Strategic Interaction Table
Strategic Interaction Table Legend:
X axis: Likelihood % (the higher the percentage, the more likely the event is to occur)
Y axis: Potential Impact % (the higher the percentage, the more the price is likely to deviate)
Z axis (grey to black): Technical Analysis %
| Rating | Short-Term | Long-Term Senior |
|---|---|---|
| Outlook | B2 | Ba3 |
| Income Statement | Baa2 | Ba3 |
| Balance Sheet | C | Baa2 |
| Leverage Ratios | B2 | C |
| Cash Flow | Baa2 | B3 |
| Rates of Return and Profitability | C | Baa2 |
*An aggregate rating for an index summarizes overall sentiment towards the companies it includes. It is calculated by averaging the ratings assigned to the individual stocks, weighted by each stock's weight in the index, to produce a single score. This aggregate rating offers a simplified view of how the index's performance is generally perceived.
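The weighted-average calculation described above can be sketched as follows. The numeric rating scale, the constituent ratings, and the index weights are all assumptions for the example (the scale mimics a Moody's-style ordinal ranking, lower meaning stronger):

```python
# Sketch of an aggregate index rating: map letter ratings to numeric
# scores, take the index-weight-weighted average, and map the result
# back to the nearest rating tier. Scale, ratings, and weights are
# hypothetical, not taken from the table above.

def aggregate_rating(ratings, weights, scale):
    """Weighted-average numeric score, mapped to the nearest rating tier."""
    score = sum(scale[r] * w for r, w in zip(ratings, weights)) / sum(weights)
    nearest = min(scale, key=lambda r: abs(scale[r] - score))
    return nearest, score

scale = {"Baa2": 9, "Ba3": 13, "B2": 15, "B3": 16, "C": 21}  # lower = stronger
ratings = ["Baa2", "B2", "C"]   # hypothetical constituent ratings
weights = [0.5, 0.3, 0.2]       # hypothetical index weights

label, score = aggregate_rating(ratings, weights, scale)
print(label, round(score, 2))  # → Ba3 13.2
```

Mapping back to the nearest tier keeps the output readable, at the cost of hiding how close the average sits to the adjacent tiers.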
How does a neural network examine financial reports and assess the financial state of a company?