| Field | Value |
|---|---|
| AUC Score | |
| Short-term Tactic¹ | |
| Dominant Strategy | |
| Time series to forecast | n |
| ML Model Testing | Transductive Learning (ML) |
| Hypothesis Testing | Statistical Hypothesis Testing |
| Surveillance | Major exchange and OTC |

¹ Short-term revised.
² Time series is updated based on short-term trends.
Key Points
About ATX Index
ML Model Testing
- n: Time series to forecast
- p: Price signals of ATX index
- j: Nash equilibria (Neural Network)
- k: Dominated move of ATX index holders
- a: Best response for ATX target price
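The game-theoretic terms above (dominated move, best response) can be made concrete with a small payoff-matrix sketch. The payoff values and strategy names below are purely illustrative assumptions, not outputs of the forecasting model:

```python
# Illustrative sketch with hypothetical payoffs: identifying strictly
# dominated strategies and a best response in a tiny payoff matrix.

def strictly_dominated(payoffs):
    """Return indices of row strategies strictly dominated by another row."""
    dominated = []
    for i, row in enumerate(payoffs):
        for j, other in enumerate(payoffs):
            if i != j and all(o > r for o, r in zip(other, row)):
                dominated.append(i)
                break
    return dominated

def best_response(payoffs, opponent_col):
    """Index of the row strategy maximizing payoff against a fixed column."""
    return max(range(len(payoffs)), key=lambda i: payoffs[i][opponent_col])

# Rows: hold / buy / sell; columns: two possible market states (hypothetical).
payoffs = [
    [2, 1],  # hold
    [3, 2],  # buy (strictly dominates hold)
    [1, 0],  # sell
]

print(strictly_dominated(payoffs))          # rows dominated by "buy"
print(best_response(payoffs, opponent_col=1))
```

A "dominated move" is one a rational holder would never play because another strategy pays strictly more in every state; the "best response" is the payoff-maximizing reply to a fixed opposing state.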
For further technical information on how our model works, see the article below:
How do KappaSignal algorithms actually work?
ATX Index Forecast Strategic Interaction Table
Strategic Interaction Table Legend:
- X axis: Likelihood % (the higher the percentage, the more likely the event is to occur)
- Y axis: Potential Impact % (the higher the percentage, the larger the expected price deviation)
- Z axis (grey to black): Technical Analysis %
| Rating | Short-Term | Long-Term Senior |
|---|---|---|
| Outlook | Ba3 | B1 |
| Income Statement | Ba3 | B3 |
| Balance Sheet | Ba1 | Baa2 |
| Leverage Ratios | Baa2 | Caa2 |
| Cash Flow | B3 | Caa2 |
| Rates of Return and Profitability | B3 | Baa2 |
*An aggregate rating for an index summarizes the overall sentiment towards the companies it includes. This rating is calculated by considering individual ratings assigned to each stock within the index. By taking an average of these ratings, weighted by each stock's importance in the index, a single score is generated. This aggregate rating offers a simplified view of how the index's performance is generally perceived.
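The weighted-average aggregation described above can be sketched in a few lines. The rating-to-number scale and the ticker weights below are illustrative assumptions, not the publisher's actual methodology:

```python
# Sketch of an aggregate index rating as a weighted average of per-stock
# ratings. The numeric scale (lower = stronger credit) and the weights
# are hypothetical.

RATING_SCALE = {"Baa2": 9, "Ba1": 11, "Ba3": 13, "B1": 14, "B3": 16, "Caa2": 18}

def aggregate_rating(stock_ratings, weights):
    """Weighted average of per-stock numeric ratings."""
    total_weight = sum(weights.values())
    score = sum(RATING_SCALE[r] * weights[s] for s, r in stock_ratings.items())
    return score / total_weight

# Hypothetical constituents and index weights.
ratings = {"STOCK_A": "Ba3", "STOCK_B": "Baa2", "STOCK_C": "B1"}
weights = {"STOCK_A": 0.5, "STOCK_B": 0.3, "STOCK_C": 0.2}

print(aggregate_rating(ratings, weights))  # single score summarizing the index
```

The single numeric score can then be mapped back to the nearest rating notch to report one aggregate rating for the index.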
How does a neural network examine financial reports and assess the financial state of a company?