DRD Stock Forecast

Outlook: DRD is assigned a short-term B2 and a long-term B2 estimated rating.
AUC Score: What is AUC Score?
Short-term Tactic¹:
Dominant Strategy:
Time series to forecast n: 4 Weeks²
ML Model Testing: Transductive Learning (ML)
Hypothesis Testing: Stepwise Regression
Surveillance: Major exchange and OTC

¹ Short-term revised.

² Time series is updated based on short-term trends.


Key Points

This exclusive content is only available to premium users.

About DRD

This exclusive content is only available to premium users.

ML Model Testing

$$ F(\text{Stepwise Regression})^{6,7} = \begin{bmatrix} p_{a1} & p_{a2} & \cdots & p_{an} \\ p_{j1} & p_{j2} & \cdots & p_{jn} \\ p_{k1} & p_{k2} & \cdots & p_{kn} \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{bmatrix} \times R(\text{Transductive Learning (ML)})^{3,4,5} \times S(n) \rightarrow 4\ \text{Weeks}, \quad R = (r_1, r_2, r_3) $$

n: Time series to forecast

p: Price signals of DRD stock

j: Nash equilibria (Neural Network)

k: Dominated move of DRD stock holders

a: Best response for DRD target price
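The summary above names stepwise regression as the hypothesis-testing method. A minimal sketch of forward stepwise selection is shown below; the feature count, toy data, and greedy RSS criterion are illustrative assumptions, not KappaSignal's actual model.

```python
# Forward stepwise regression sketch: greedily add the feature that most
# reduces the residual sum of squares (RSS) of an ordinary least-squares fit.
# Toy data and thresholds are assumptions for illustration only.
import numpy as np

def fit_rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(resid @ resid)

def forward_stepwise(X, y, max_features=2):
    """Greedily select up to max_features columns that best explain y."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = {j: fit_rss(X[:, selected + [j]], y) for j in remaining}
        best = min(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy example: y depends almost entirely on feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)
print(forward_stepwise(X, y, max_features=1))  # → [0]
```

In a price-forecasting setting, the columns of `X` would be candidate price signals and `y` the future return over the forecast horizon.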

 

For further technical information on how our model works, we invite you to read the article below:

How do KappaSignal algorithms actually work?

DRD Stock Forecast (Buy or Sell) Strategic Interaction Table

Strategic Interaction Table Legend:

X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)

Y axis: *Potential Impact% (The higher the percentage value, the greater the expected price deviation.)

Z axis (Grey to Black): *Technical Analysis%
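The legend above pairs an event's likelihood with its potential price impact. One way to read such a cell is sketched below; the thresholds and zone labels are illustrative assumptions, not KappaSignal's published methodology.

```python
# Sketch of reading a strategic interaction cell: combine likelihood (x axis)
# and potential impact (y axis) into a coarse attention zone. The score
# formula and cutoffs below are assumptions for illustration only.

def interaction_zone(likelihood_pct, impact_pct):
    """Map a (likelihood, impact) cell to a coarse attention zone."""
    if not (0 <= likelihood_pct <= 100 and 0 <= impact_pct <= 100):
        raise ValueError("percentages must be in [0, 100]")
    score = likelihood_pct * impact_pct / 100.0  # expected-deviation proxy
    if score >= 50:
        return "act"     # likely event with a large expected move
    if score >= 20:
        return "watch"   # notable but uncertain
    return "ignore"      # unlikely or immaterial

print(interaction_zone(90, 80))  # → act
print(interaction_zone(30, 40))  # → ignore
```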

Rating                              Short-Term  Long-Term Senior
Outlook                             B2          B2
Income Statement                    C           Ba3
Balance Sheet                       B1          B2
Leverage Ratios                     Baa2        Caa2
Cash Flow                           Baa2        B3
Rates of Return and Profitability   C           C

*Financial analysis is the process of evaluating a company's financial performance and position by a neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does a neural network examine financial reports and understand the financial state of a company?
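Conceptually, such a network maps numeric financial-statement inputs to a health score. The sketch below is a one-hidden-layer forward pass with random, untrained weights; the ratio names, architecture, and weights are illustrative assumptions, since the actual model and its training data are not public.

```python
# Sketch of a neural network scoring financial-statement inputs.
# The ratios, architecture, and (random, untrained) weights are
# assumptions for illustration only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def financial_health_score(ratios, W1, b1, w2, b2):
    """One-hidden-layer MLP mapping financial ratios to a score in (0, 1)."""
    h = np.tanh(W1 @ ratios + b1)   # hidden representation of the ratios
    return float(sigmoid(w2 @ h + b2))

# Toy inputs: [current ratio, debt/equity, operating margin, FCF yield]
ratios = np.array([1.8, 0.6, 0.12, 0.05])
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
w2, b2 = rng.normal(size=3), 0.0
score = financial_health_score(ratios, W1, b1, w2, b2)
print(score)  # a value strictly between 0 and 1
```

A trained version would fit `W1`, `b1`, `w2`, `b2` against labeled outcomes (e.g. subsequent credit-rating changes) rather than sampling them at random.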

References

  1. M. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, New York, 1994.
  2. Vilnis L, McCallum A. 2015. Word representations via Gaussian embedding. arXiv:1412.6623 [cs.CL]
  3. Chamberlain G. 2000. Econometrics and decision theory. J. Econom. 95:255–83
  4. V. Borkar. Q-learning for risk-sensitive control. Mathematics of Operations Research, 27:294–311, 2002.
  5. M. J. Hausknecht. Cooperation and Communication in Multiagent Deep Reinforcement Learning. PhD thesis, The University of Texas at Austin, 2016.
  6. R. Sutton and A. Barto. Reinforcement Learning. The MIT Press, 1998.
  7. S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs, NJ, 3rd edition, 2010.

This project is licensed under the license; additional terms may apply.