The paper I’ve chosen to review to start the week is quite an interesting one. It is called Probabilistic Neural Programs, and it is a noteworthy attempt to bridge the gap between the state of the art in deep neural networks and the developments taking place in the emerging computing paradigm of Probabilistic Programming.
As the authors point out at the outset, current deep learning frameworks face suboptimal trade-offs between the expressivity of their computational models and the amount of data required to train them. Moreover, deep learning operates in a setting of continuous approximation, while the evidence suggests that discrete inference algorithms outperform continuous approximations.
With all this in mind, the researchers put forward in this paper the view that Probabilistic Programming is a paradigm that both supports the…
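To give a concrete feel for the idea before diving in, here is a minimal sketch of the core mechanism the paper builds on: a program makes discrete, nondeterministic choices, each choice is scored by a parametric function (a neural network in the paper; a toy hand-fixed function here), and inference is an approximate search over complete choice sequences. This is an illustrative Python toy under my own assumptions, not the authors' implementation, and names like choose/beam_search are hypothetical, not the paper's API.

```python
# Stand-in for a neural scoring function: maps (partial choice sequence,
# candidate choice) to an unnormalized log-score. In the paper this role
# is played by a learned neural network; here it is a fixed toy rule.
def score(prefix, choice):
    # Toy preference: penalize repeating the previous choice.
    if prefix and prefix[-1] == choice:
        return -1.0
    return 0.5

def beam_search(choice_sets, beam_size=3):
    """Approximate inference: keep the `beam_size` highest-scoring
    partial programs at each nondeterministic choice point."""
    beam = [((), 0.0)]  # (choice sequence so far, accumulated log-score)
    for options in choice_sets:
        candidates = [
            (prefix + (c,), logp + score(prefix, c))
            for prefix, logp in beam
            for c in options
        ]
        candidates.sort(key=lambda x: x[1], reverse=True)
        beam = candidates[:beam_size]
    return beam

# A "program" with three discrete choice points, two options each.
choice_sets = [("a", "b"), ("a", "b"), ("a", "b")]
for seq, logp in beam_search(choice_sets):
    print(seq, f"log-score = {logp:.2f}")
```

The point of the sketch is the separation of concerns: the program specifies the discrete search space, the scoring function (learned, in the real system) ranks partial executions, and inference remains a discrete search rather than a continuous approximation.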