State and effectiveness of continuous AlphaZero algorithms?

Hello,

I am interested in using MCTS-based RL algorithms for an environment with continuous state and action spaces. Moerland et al. proposed a variation of AlphaZero for this setting in 2018. The paper has only around 61 citations (mostly from surveys), so it does not seem to have been widely adopted.

So I'd like to know whether anyone has tried this algorithm, or is aware of other MCTS approaches for continuous action spaces. For context, I've put a rough sketch of my understanding of the core idea below.
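As far as I understand it, the key trick for continuous actions is progressive widening: a node only adds a newly sampled action (e.g. drawn from a policy network) when its visit count justifies it, and otherwise descends into existing children via UCT. The sketch below is just my own minimal illustration of that idea, not the paper's code; the class, the `sample_action` callback, and the constants `c_pw`, `alpha`, `c_uct` are placeholders.

```python
import math

class Node:
    """Minimal MCTS node with progressive widening over continuous actions (sketch)."""

    def __init__(self, state):
        self.state = state
        self.children = {}   # action (tuple) -> child Node
        self.q = {}          # action -> mean value estimate
        self.n = {}          # action -> visit count
        self.visits = 0

    def allow_new_action(self, c_pw=1.0, alpha=0.5):
        # Progressive widening: cap the number of children at c_pw * visits^alpha
        # (hypothetical constants; tuned per environment).
        return len(self.children) < c_pw * max(self.visits, 1) ** alpha

    def select_action(self, sample_action, c_uct=1.4):
        if self.allow_new_action():
            # Sample a fresh continuous action, e.g. from a policy network.
            a = tuple(sample_action(self.state))
            self.children.setdefault(a, Node(None))
            self.q.setdefault(a, 0.0)
            self.n.setdefault(a, 0)
            return a
        # Otherwise pick among existing children with a UCT-style score.
        def uct(a):
            return self.q[a] + c_uct * math.sqrt(math.log(self.visits + 1) / (self.n[a] + 1e-8))
        return max(self.children, key=uct)

    def update(self, action, value):
        # Incremental mean update of the action-value estimate after a rollout/evaluation.
        self.visits += 1
        self.n[action] += 1
        self.q[action] += (value - self.q[action]) / self.n[action]
```

In other words, the search tree grows its action set gradually instead of enumerating a fixed discrete set, which is roughly what I'd like to apply to my environment.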

submitted by /u/Playmad37
