AI News - Week 40


In this post, we’ll be covering three of the main stories that emerged this week.

While there is an accompanying video at the end of this post, the purpose of this article is to quickly summarise some of the most important advances and events in the world of Artificial Intelligence.

IBM invests $240 million in AI

The first story concerns IBM and the establishment of a research lab focused on Artificial Intelligence. IBM and the Massachusetts Institute of Technology (MIT) have agreed a partnership that will span ten years, backed by a $240 million investment from IBM. Billed as the largest long-term partnership of its kind, it enables more than 100 scientists to focus their research on AI in areas such as algorithm development, hardware improvement, the exploration of societal and economic benefits, and the identification of industries in which AI has potential. John Kelly III, Senior VP for Cognitive Solutions at IBM, acknowledged that “today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives.” Both parties hope to address this through the partnership.

Facebook AI models human expressions with hundreds of Skype video conversations

In other AI news, Facebook’s AI has been learning human reactions. Researchers from Facebook’s in-house AI lab built a bot that learnt to mimic human expressions. The reactions were expressed through an animation controlled by an artificially intelligent algorithm. To teach the algorithm human-like expressions, it was exposed to several hundred Skype video conversations. The algorithm divided each face it saw into 68 key points (facial landmarks), which it then analysed and monitored throughout the conversations. The resulting bot is said to have been able to watch a human speaking on video and produce an appropriate facial expression in real time. To test the system, the team showed a panel animations of the bot reacting to a human alongside footage of a human reacting to a human, and the panel judged the two to be equally realistic. The work was presented at the International Conference on Intelligent Robots and Systems (IROS) in Vancouver.
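To make the landmark idea concrete, here is a minimal sketch of 68-point facial landmark tracking in Python. This is not Facebook’s actual pipeline: it uses dlib’s pretrained 68-point predictor as a stand-in for whatever Facebook used internally, and the video filename and model path are assumptions for illustration.

```python
# A sketch of standard 68-point facial landmark tracking, NOT Facebook's
# actual system. Requires dlib and OpenCV, plus dlib's pretrained model
# (shape_predictor_68_face_landmarks.dat, downloadable from dlib.net).
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("conversation.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        # 68 (x, y) points covering the jawline, brows, eyes, nose and
        # mouth; tracking how they move frame by frame gives a compact
        # signal describing the speaker's expression.
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        # ...a reaction model would consume `points` here...
cap.release()
```

Tracking a small, fixed set of landmarks rather than raw pixels is what makes this approach tractable: the expression signal becomes just 68 coordinate pairs per frame.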

Intel creates chip that resembles the human brain

Intel were also eager to publicise advancements in their own AI programme. The semiconductor manufacturer began experimenting with neuromorphic chips: computer chips that try to mimic how a real brain functions. Dubbed ‘Loihi’, the chip consists of 128 cores, each containing 1,024 artificial neurons, for a total of 131,072 neurons (128 × 1,024) and around 130 million synaptic connections. So what does this mean for the world of AI? With a synaptic complexity somewhat beyond that of a lobster’s brain, the chip can learn in real time on the device itself, which greatly reduces the need for giant training datasets. Researchers see potential use cases in devices that need to learn on the go, such as autonomous cars and drones, cameras that search for missing persons, and traffic lights that automatically adapt to traffic conditions. The chip is also significantly more efficient than a traditional processor, since it only consumes energy when it is called into action. Intel have claimed the hardware could be nearly 1,000 times more energy efficient than the hardware currently used to train artificially intelligent systems. The first fabricated iteration is expected in November of this year; however, the finished product is unlikely to make its way out of the research lab until 2020–2022.
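The “only consumes energy when called into action” idea comes from spiking neurons. Below is a toy leaky integrate-and-fire (LIF) neuron in Python that illustrates the event-driven principle behind chips like Loihi; it is an illustrative sketch, not Intel’s architecture, and all parameter values are arbitrary assumptions.

```python
# Toy leaky integrate-and-fire neuron: it integrates input current,
# leaks charge over time, and only fires (i.e. "spends energy") when
# its membrane potential crosses a threshold. Illustrative only.
import numpy as np

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over a sequence of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:              # fire only when driven
            spikes.append(1)
            potential = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

# Mostly-silent input: the neuron stays quiet (no spikes, no activity)
# except where the input actually carries a signal.
rng = np.random.default_rng(0)
inputs = np.where(rng.random(50) < 0.1, 0.6, 0.0)
print(simulate_lif(inputs))
```

Because activity is sparse, most neurons do nothing most of the time, which is where the claimed energy savings over conventional, always-clocked processors come from.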

That concludes our first AI News blog. AI is clearly advancing at a rapid pace, which is encouraging for everyone in and out of the industry. If you enjoyed reading this post, check out our YouTube channel, where we discuss a variety of topics including artificial intelligence, machine learning, image/video recognition and much more.