AUC Score:
Short-term Tactic¹:
Dominant Strategy:
Time series to forecast n²:
ML Model Testing: Transductive Learning (ML)
Hypothesis Testing: Factor
Surveillance: Major exchange and OTC
¹ Short-term revised.
² Time series is updated based on short-term trends.
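The AUC score referenced above measures how well a binary buy/sell signal separates positive from negative outcomes. As a hedged illustration only (not the KappaSignal pipeline itself), the sketch below computes an AUC for a toy classifier on synthetic features; the data, model, and labeling rule are all assumptions.

```python
# Minimal sketch: AUC for a hypothetical buy/sell classifier.
# All data is synthetic; in practice features would be engineered from
# price history and labels from realized forward returns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # hypothetical technical indicators
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = "buy"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]  # predicted probability of the "buy" class

print("AUC:", roc_auc_score(y_test, probs))
```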
Key Points
About LEXX
ML Model Testing
n: Time series to forecast
p: Price signals of LEXX stock
j: Nash equilibria (Neural Network)
k: Dominated move of LEXX stock holders
a: Best response for LEXX target price
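The model class named above, transductive learning, fits labeled and unlabeled observations together and assigns labels to the unlabeled points in-sample rather than through a separate prediction step. A minimal sketch of that idea is shown below using scikit-learn's LabelSpreading on synthetic data; the features standing in for the price signal p, the labeling rule, and the kernel settings are illustrative assumptions, not the production model.

```python
# Minimal sketch of a transductive setup: some days carry buy/sell labels,
# the rest are labeled in-sample via label spreading over a similarity graph.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))          # hypothetical daily price-signal features (p)
y = (X[:, 0] > 0).astype(int)          # synthetic ground truth, for illustration only
y_partial = y.copy()
y_partial[rng.random(300) < 0.7] = -1  # -1 marks unlabeled days (the transductive set)

model = LabelSpreading(kernel="rbf", gamma=0.5)
model.fit(X, y_partial)                # labeled and unlabeled points are fit together

unlabeled = y_partial == -1
accuracy = (model.transduction_[unlabeled] == y[unlabeled]).mean()
print(f"in-sample accuracy on unlabeled days: {accuracy:.2f}")
```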
For further technical information on how our model works, we invite you to read the article below:
How do KappaSignal algorithms actually work?
LEXX Stock Forecast (Buy or Sell) Strategic Interaction Table
Strategic Interaction Table Legend:
X axis: Likelihood % (the higher the percentage value, the more likely the event is to occur)
Y axis: Potential Impact % (the higher the percentage value, the larger the expected price deviation)
Z axis (grey to black): Technical Analysis %
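To make the three axes concrete, the sketch below assembles a toy version of such a table: a grid indexed by Likelihood % and Potential Impact %, with each cell holding a Technical Analysis % score that would drive the grey-to-black shading. All values are randomly generated placeholders, not LEXX outputs.

```python
# Minimal sketch of the three-axis legend: likelihood (X) by impact (Y),
# with a technical-analysis score (Z) in each cell.
import numpy as np
import pandas as pd

likelihood = [10, 30, 50, 70, 90]  # X axis, in %
impact = [10, 30, 50, 70, 90]      # Y axis, in %
rng = np.random.default_rng(2)
technical = rng.integers(0, 101, size=(len(impact), len(likelihood)))  # Z axis, in %

table = pd.DataFrame(technical, index=impact, columns=likelihood)
table.index.name = "Potential Impact %"
table.columns.name = "Likelihood %"
print(table)
```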
Rating | Short-Term | Long-Term Senior |
---|---|---|
Outlook | Caa2 | B1 |
Income Statement | C | Caa2 |
Balance Sheet | C | Caa2 |
Leverage Ratios | B3 | Baa2 |
Cash Flow | Caa2 | Baa2 |
Rates of Return and Profitability | C | B1 |
*Financial analysis is the process of evaluating a company's financial performance and position with a neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does a neural network examine financial reports and assess the financial state of a company?
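One common way to frame this, and the assumption behind the sketch below, is to flatten statement line items into numeric features (leverage, margins, cash-flow coverage, and so on) and train a small feedforward network to map them to a coarse rating bucket. The features, labels, and architecture here are illustrative only, not the model behind the table above.

```python
# Minimal sketch: a small feedforward network mapping synthetic financial
# ratios to a coarse "investment grade" flag.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))                          # hypothetical financial ratios per period
y = (X[:, 0] - X[:, 2] + 0.3 * X[:, 4] > 0).astype(int)  # synthetic rating bucket

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0),
)
model.fit(X, y)
print("predicted rating bucket for a new filing:", model.predict(rng.normal(size=(1, 6)))[0])
```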