POST Stock Forecast

Outlook: POST is assigned an estimated short-term rating of B2 and an estimated long-term rating of B1.
AUC Score: What is AUC Score?
Short-term Tactic¹:
Dominant Strategy:
Time series to forecast n: 4 Weeks²
ML Model Testing: Inductive Learning (ML)
Hypothesis Testing: Logistic Regression
Surveillance: Major exchange and OTC

¹ Short-term revised.

² Time series is updated based on short-term trends.
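The AUC score listed above measures how well a binary classifier separates up-moves from down-moves: it is the probability that a randomly chosen positive example is ranked above a randomly chosen negative one. A minimal sketch, with made-up labels and probabilities rather than actual model output:

```python
def auc_score(y_true, y_prob):
    """Rank-based AUC: probability a random positive outranks a random negative."""
    pos = [p for p, y in zip(y_prob, y_true) if y == 1]
    neg = [p for p, y in zip(y_prob, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # 1 = price rose, 0 = price fell
y_prob = [0.9, 0.2, 0.7, 0.35, 0.4, 0.3, 0.8, 0.1]   # illustrative P(up) from a model

print(f"AUC = {auc_score(y_true, y_prob):.4f}")  # 0.9375: 15 of 16 pos/neg pairs ranked correctly
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect separation, which is why the score is a common sanity check for directional forecasts.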



ML Model Testing

$$F(\text{Logistic Regression})^{6,7} = \begin{pmatrix} p_{a1} & p_{a2} & \cdots & p_{an} \\ p_{j1} & p_{j2} & \cdots & p_{jn} \\ p_{k1} & p_{k2} & \cdots & p_{kn} \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{pmatrix} \times R(\text{Inductive Learning (ML)})^{3,4,5} \times S(n) \to 4\ \text{Weeks} \quad e^{rx}$$

n: Time series to forecast

p: Price signals of POST stock

j: Nash equilibria (Neural Network)

k: Dominated move of POST stock holders

a: Best response for POST target price
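The hypothesis-testing step above fits a logistic regression that maps price signals p to the probability the stock closes higher over the n = 4-week horizon. A minimal sketch of that step, using synthetic data and plain stochastic gradient descent (the features, latent rule, and hyperparameters are assumptions for illustration, not KappaSignal's actual inputs):

```python
import math
import random

random.seed(0)

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# Synthetic "price signal" features: 4 weekly signals per sample (assumption).
# The label marks whether the stock closed higher over the 4-week horizon,
# driven by a simple latent rule so there is structure to learn.
def make_sample():
    x = [random.gauss(0.0, 1.0) for _ in range(4)]
    y = 1 if sum(x) + random.gauss(0.0, 0.5) > 0 else 0
    return x, y

data = [make_sample() for _ in range(400)]

# Logistic regression fit by plain stochastic gradient descent (log-loss).
w, b, lr = [0.0] * 4, 0.0, 0.1
for _ in range(50):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of the log-loss with respect to the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# In-sample accuracy as a rough sanity check of the fit.
accuracy = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == (y == 1)
    for x, y in data
) / len(data)
print(f"in-sample accuracy = {accuracy:.2f}")
```

The fitted model outputs a probability per horizon rather than a hard call, which is what feeds a likelihood-style table such as the one below.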

 

For further technical information on how our model works, see the article below:

How do KappaSignal algorithms actually work?

POST Stock Forecast (Buy or Sell) Strategic Interaction Table

Strategic Interaction Table Legend:

X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)

Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)

Z axis (Grey to Black): *Technical Analysis%

Rating                               Short-Term   Long-Term Senior
Outlook                              B2           B1
Income Statement                     Baa2         Ba3
Balance Sheet                        C            Baa2
Leverage Ratios                      Baa2         Caa2
Cash Flow                            Caa2         Ba3
Rates of Return and Profitability    C            C

*Financial analysis is the process of evaluating a company's financial performance and position using a neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
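As a minimal sketch of the idea, a feed-forward network can map statement-derived ratios to a health score. The ratios, weights, and layer sizes below are made-up assumptions chosen to show the mechanics of a forward pass, not a trained model:

```python
import math

def relu(v):
    # Rectified linear activation, applied element-wise.
    return [max(0.0, x) for x in v]

def layer(x, W, b):
    # One dense layer: each output is a weighted sum of inputs plus a bias.
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi for row, bi in zip(W, b)]

# Assumed features drawn from the three statements the text lists:
# [current ratio, debt/equity, operating cash flow margin]
features = [1.8, 0.9, 0.12]

W1 = [[0.5, -0.8, 2.0], [-0.3, 0.6, 1.5]]  # hidden layer (2 units), made-up weights
b1 = [0.1, -0.2]
W2 = [[1.0, -1.2]]                          # output layer (1 unit)
b2 = [0.0]

hidden = relu(layer(features, W1, b1))
score = 1 / (1 + math.exp(-layer(hidden, W2, b2)[0]))  # squash to P("healthy")
print(f"health score = {score:.2f}")
```

In practice the weights would be learned from many labeled statements; the forward pass itself is all the "examination" the network performs.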

References

  1. Thomas P, Brunskill E. 2016. Data-efficient off-policy policy evaluation for reinforcement learning. In Proceedings of the International Conference on Machine Learning, pp. 2139–48. La Jolla, CA: Int. Mach. Learn. Soc.
  2. Thompson WR. 1933. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 25:285–94
  3. van der Pol E, Oliehoek FA. 2016. Coordinated deep reinforcement learners for traffic light control. In NIPS Workshop on Learning, Inference and Control of Multi-Agent Systems.
  4. Hartford J, Lewis G, Taddy M. 2016. Counterfactual prediction with deep instrumental variables networks. arXiv:1612.09596 [stat.AP]
  5. Petrik M, Subramanian D. 2012. An approximate solution method for large risk-averse Markov decision processes. In Proceedings of the 28th International Conference on Uncertainty in Artificial Intelligence.
  6. Morimura T, Sugiyama M, Kashima M, Hachiya H, Tanaka T. 2010. Nonparametric return distribution approximation for reinforcement learning. In Proceedings of the 27th International Conference on Machine Learning, pp. 799–806.
  7. Collins E. 1997. Using Markov decision processes to optimize a nonlinear functional of the final distribution, with manufacturing applications. In Stochastic Modelling in Innovative Manufacturing, pp. 30–45. Springer.

This project is licensed under the license; additional terms may apply.