A bridge between deep neural networks and probabilistic programming

The Intelligence of Information

The paper I’ve chosen to review to start the week is an interesting one. It is called Probabilistic Neural Programs, and it is a promising attempt at bridging the gap between the state of the art in deep neural networks and the current developments taking place in the emerging computing paradigm of Probabilistic Programming.

As the authors point out at the outset, the current state of the art in deep learning frameworks faces an awkward trade-off between the expressivity of the computational models and the data required to train them. Moreover, deep learning operates in a continuous, approximate algorithmic setting, while the evidence suggests that discrete inference algorithms can outperform continuous approximations.

Probabilistic Neural Programs 

With all this in mind, the researchers argue in this post that Probabilistic Programming is a paradigm that both supports the…

View original post 1,209 more words
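To make the paradigm concrete, here is a minimal sketch of the probabilistic-programming idea in plain Python (this is illustrative only, not the paper’s actual framework; the scorer, features, and trace structure are my own assumptions): a program makes discrete random choices, a small “neural” scorer (here just one linear unit) assigns each choice a log-weight, and inference enumerates the finite set of execution traces to recover a posterior.

```python
import math

def scorer(features, weights):
    # Hypothetical one-unit "neural" scorer: a linear function of
    # trace features, producing an unnormalized log-weight.
    return sum(f * w for f, w in zip(features, weights))

def enumerate_traces(weights):
    # The "probabilistic program": two sequential binary choices.
    # Each complete trace (a, b) is scored on simple indicator features.
    log_weights = {}
    for a in (0, 1):
        for b in (0, 1):
            log_weights[(a, b)] = scorer([a, b, a * b], weights)
    # Discrete inference by exhaustive enumeration: normalize the
    # log-weights into a posterior over traces (a softmax).
    z = sum(math.exp(w) for w in log_weights.values())
    return {t: math.exp(w) / z for t, w in log_weights.items()}

posterior = enumerate_traces([1.0, -0.5, 2.0])
```

With these toy weights, the trace (1, 1) receives the highest score and hence the highest posterior probability; in the paper’s setting the scorer would be a trained network and inference would have to be approximate for larger trace spaces.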

Goldman Sachs summary on Cobots — RobotEnomics

Goldman Sachs (“GS”) has released a series of research reports in 2016 centered on The Factory of the Future. The series, which they call ‘Profiles in Innovation’, examines six technologies GS believes are driving the transition, from “Cobots” to 3D printing to Virtual and Augmented Reality to the Internet of Things, and how these technologies could yield […]

via Goldman Sachs summary on Cobots — RobotEnomics

TensorFlow Meets Microsoft’s CNTK

The eScience Cloud

CNTK is Microsoft’s Computational Network Toolkit for building deep neural networks and it is now available as open source on Github. Because I recently wrote about TensorFlow I thought it would be interesting to study the similarities and differences between these two systems. After all, CNTK seems to be the reigning champ of many of the image recognition challenges. To be complete I should also look at Theano, Torch and Caffe. These three are also extremely impressive frameworks. While this study will focus on CNTK and TensorFlow I will try to return to the others in the future. Kenneth Tran has a very nice top level (but admittedly subjective) analysis of all five deep learning tool kits here. This will not be a tutorial about CNTK or Tensorflow. Rather my goal is to give a high level feel for how they compare from the programmer’s perspective. …

View original post 3,926 more words
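The “computational network” both toolkits are built around can be sketched in a few lines of plain Python (a hand-rolled toy, assuming nothing about the actual CNTK or TensorFlow APIs): operations form a directed graph, and evaluation flows forward from input nodes.

```python
class Node:
    # Toy computational-graph node: an operation plus its input nodes.
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self, feed):
        # Evaluate the graph recursively; "feed" maps input nodes to values.
        if self.op == "input":
            return feed[self]
        vals = [n.eval(feed) for n in self.inputs]
        if self.op == "add":
            return vals[0] + vals[1]
        if self.op == "mul":
            return vals[0] * vals[1]
        raise ValueError(f"unknown op: {self.op}")

# Build the graph for y = x * w + b, then evaluate it with a feed dict.
x, w, b = Node("input"), Node("input"), Node("input")
y = Node("add", Node("mul", x, w), b)
```

Both CNTK and TensorFlow elaborate this same idea with typed tensors, automatic differentiation, and distributed execution; the programmer-facing difference the post explores is largely in how such graphs are declared and run.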

Autonomous Technology and the Greater Human Good

Self-Aware Systems

Here is a preprint of:

Omohundro, Steve (forthcoming 2013) “Autonomous Technology and the Greater Human Good”, Journal of Experimental and Theoretical Artificial Intelligence (special volume “Impacts and Risks of Artificial General Intelligence”, ed. Vincent C. Müller).

Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives toward self-protection, resource acquisition, replication, and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the “Safe-AI Scaffolding Strategy” for creating powerful safe systems with a high confidence of…

View original post 6 more words

Formal Methods for AI Safety

Self-Aware Systems

Future intelligent systems could cause great harm to humanity. Because of the large risks, if we are to responsibly build intelligent systems, they must not only be safe but we must be very convinced that they are safe. For example, an AI which is taught human morality by reinforcement learning might be safe, but it’s hard to see how we could become sufficiently convinced to responsibly deploy it.

Before deploying an advanced intelligent system, we should have a very convincing argument for its safety. If that argument is rigorously correct, then it is a mathematical proof. This is not a trivial endeavor! It’s just the only path that appears to be open to us.

The mathematical foundations of computing have been known since Church and Turing‘s work in 1936. Both created computational models which were simultaneously logical models about which theorems could be proved. Church created the lambda calculus

View original post 360 more words
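The lambda calculus the excerpt ends on can itself be demonstrated with ordinary Python lambdas (a standard Church-encoding sketch, not anything from the post itself; the names ZERO, SUCC, and ADD are conventional): numbers are represented as functions that apply another function n times, and arithmetic falls out of function composition.

```python
# Church numerals: n is encoded as "apply f to x, n times".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Interpret a Church numeral by counting applications of f.
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
```

Here `to_int(ADD(TWO)(THREE))` evaluates to 5. Church’s insight, which the post builds on, is that such encodings make programs into logical objects about which theorems, including safety theorems, can be proved.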