Samsung Research America hosted the Samsung AI Summit on January 9th, 2017, at its Mountain View campus, with nine visionary speakers in AI research from UC Berkeley, the University of Washington, OpenAI, Facebook, Microsoft, Nvidia, Uber, AutoX, Graphcore, and Wave Computing.  Topics spanned the entire AI technology stack, from emerging applications and algorithms to frameworks and enabling hardware.  Below are links to the videos from the Samsung AI Summit 2017.


AI in The Enterprise -- Making Corporations Smart Again


Danny Lange | VP, AI and Machine Learning, Unity Technologies

Have you noticed how applications seem to get smarter?  Apps make recommendations based on past purchases; you get an alert from your bank when it suspects a fraudulent transaction; and you receive emails from your favorite store when items related to things you typically buy go on sale.  These examples of application intelligence rely on a technology called Machine Learning.  Understanding the algorithms behind Machine Learning is difficult, and running the infrastructure needed to build accurate models and use them at scale is very challenging.  At Uber and Amazon, Danny's teams built Machine Learning services that let business teams easily embed intelligence into their applications to perform important functions such as ETA prediction, fraud detection, churn prediction, demand forecasting, and much more.
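
For readers curious what such an embedded model can look like, here is a minimal, self-contained sketch of a churn-style classifier, assuming scikit-learn; the features, data, and labels are invented for illustration and do not reflect any actual Uber or Amazon service.

```python
# Minimal churn-prediction sketch on synthetic data (hypothetical features:
# days since last purchase, orders per month, support tickets).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                      # made-up per-customer features
churned = (X[:, 0] > 0.5).astype(int)               # synthetic label: inactive customers churn

X_train, X_test, y_train, y_test = train_test_split(X, churned, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# An application would surface the churn probability, e.g. to trigger a retention offer.
print("held-out accuracy:", model.score(X_test, y_test))
print("churn probability for one customer:", model.predict_proba(X_test[:1])[0, 1])
```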

VIDEO:  AI-Summit-2017-Danny-Lange-Large-540p.mov

PRESENTATION


End-to-end Deep Learning for Robotics


Pieter Abbeel | Professor, UC Berkeley, Researcher, OpenAI

Deep learning has enabled significant advances in supervised learning problems such as speech recognition and visual recognition.  One of the key characteristics of these advances is their end-to-end nature: a deep neural net is trained to map all the way from raw sensory inputs to classification outputs.  In this talk, Pieter will highlight some recent results in end-to-end learning of deep representations with direct applications in robotics: state estimation, visuomotor policies for manipulation and flight, and inverse optimal control.
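
As a rough illustration of the end-to-end idea, the sketch below maps raw pixels directly to motor commands with a single network trained on (image, action) pairs.  It assumes PyTorch; the architecture, sizes, and loss are placeholders rather than the models discussed in the talk.

```python
# One network from camera frame to joint commands, trained behavior-cloning style.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, n_joints=7):
        super().__init__()
        self.encoder = nn.Sequential(                 # raw pixels -> features
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.policy = nn.Sequential(                  # features -> joint commands
            nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, n_joints),
        )

    def forward(self, image):
        return self.policy(self.encoder(image))

policy = VisuomotorPolicy()
frames = torch.randn(8, 3, 64, 64)                    # a batch of camera frames
targets = torch.randn(8, 7)                           # e.g. demonstrated joint torques
loss = nn.functional.mse_loss(policy(frames), targets)
loss.backward()
```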

VIDEO:  AI-Summit-2017-Pieter-Abbeel-Large-540p.mov

PRESENTATION

Conversational AI


Deng Li | Chief Scientist of AI, Microsoft

Conversational systems, also called spoken (or text) dialogue systems, or more fashionably these days, AI bots, have a long history of research and commercialization.  There are three basic approaches, each with its own strengths and weaknesses: symbolic rule- or template-based systems (more popular before the late 90s), statistical, data-driven approaches using (shallow) machine learning, and approaches using deep learning (starting around 2014).  This talk will provide an overview of conversational systems and a few related research projects that Deng's research team has been pursuing in the Microsoft AI and Research Division.
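
To make the contrast concrete, here is a toy example of the first, rule/template-based approach: responses come from hand-written patterns, which is predictable but brittle compared with the statistical and deep learning approaches above.  The patterns and wording are invented for the sketch.

```python
# Toy template-based responder: the first matching hand-written rule wins.
import re

TEMPLATES = [
    (re.compile(r"\b(hi|hello)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bweather\b", re.I),    "Which city would you like the weather for?"),
    (re.compile(r"\bbye\b", re.I),        "Goodbye!"),
]

def respond(utterance: str) -> str:
    for pattern, reply in TEMPLATES:
        if pattern.search(utterance):
            return reply
    return "Sorry, I did not understand that."        # no rule fired

print(respond("Hi there, what's the weather like?"))   # first matching rule wins
```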

VIDEO:  AI-Summit-2017-Deng-Li-Large-540p.mov

PRESENTATION


Deep Learning Acceleration with Parallel Computing  


Bryan Catanzaro | VP, Applied Deep Learning, Nvidia

Training and deploying state-of-the-art deep neural networks is very computationally intensive, with tens of exaflops needed to train a single model on a large dataset.  The high-density compute afforded by modern GPUs has been key to many of the advances in AI over the past few years.  However, researchers need more than a fast processor: they also need optimized libraries and programming tools so they can experiment with new ideas efficiently.  They also need scalable systems that use many of these processors together to train a single model.  In this talk, Bryan will discuss platforms for deep learning and how NVIDIA is working to build the deep learning platforms of the future.
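
As one concrete example of scaling training across processors, the sketch below shows data-parallel training with PyTorch's DistributedDataParallel, where each GPU works on its own shard of data and gradients are averaged across ranks.  The model and data are placeholders, and this is not tied to any specific NVIDIA library.

```python
# Minimal multi-GPU data-parallel training sketch (one process per GPU).
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(1024, 10).cuda(rank), device_ids=[rank])  # gradients all-reduced across GPUs
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(100):                                   # placeholder training loop
        x = torch.randn(32, 1024, device=rank)             # each rank sees its own data shard
        y = torch.randint(0, 10, (32,), device=rank)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()                    # backward() averages gradients across ranks
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()                 # one process per GPU
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```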

VIDEO:  Ai-Summit-2017-Bryan-Catanzaro-Large-540p.mov

PRESENTATION

How to Build a Highly Efficient Processor for Machine Learning


Nigel Toon | CEO, Graphcore

The presentation will cover some of the key compute and processor-architecture challenges that must be solved to build a highly efficient processor for artificial intelligence and machine learning.

VIDEO:  AI-Summit-2017-Nigel-Toon-Large-540p.mov


Machine Learning Acceleration


Chris Nicol | CTO, Wave Computing

Data scientists have made tremendous advances in enhancing business models and service offerings across industry verticals using deep neural networks (DNNs).  But training DNNs can take a week or more on traditional hardware solutions that rely on legacy architectures with limited performance and scalability.  Imagine the new innovations that could be realized in Machine Learning if training DNNs were reduced from weeks to days, or from days to hours (even seconds), for both graphics-centric and text-centric models.

VIDEO:  AI-Summit-2017-Chris-Nicol-Large-540p.mov

PRESENTATION

Learning Multimodal Knowledge About Common Life Scenarios


Yejin Choi | Assistant Professor, University of Washington

Online images and text provide rich records of human lives, events, and activities.  In this talk, Yejin will survey some of her team's recent attempts to learn various aspects of commonsense knowledge from naturally existing multi-modal web data.  A recurring theme across these projects is the use of naturally existing multi-modal web data to obtain domain-specific commonsense knowledge, and how such knowledge can help improve related downstream tasks.  Yejin will also briefly highlight a new globally coherent text generation model, "neural checklist models," which can track what has already been discussed and what has yet to be discussed via a pair of attention mechanisms.
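
A highly simplified sketch of the checklist idea is shown below: keep a coverage vector over agenda items and split attention between items already used and items still new.  This is not the published neural checklist architecture; the function, sizes, and names are invented for illustration.

```python
# Toy "checklist" attention: track which agenda items have been discussed.
import torch

def checklist_attention(decoder_state, items, coverage):
    """decoder_state: (hidden,); items: (n_items, hidden); coverage: (n_items,) in [0, 1]."""
    scores = items @ decoder_state                                        # relevance of each agenda item
    new_attn = torch.softmax(scores + torch.log(1 - coverage + 1e-6), 0)  # favor items not yet mentioned
    used_attn = torch.softmax(scores + torch.log(coverage + 1e-6), 0)     # allow referring back to used items
    coverage = torch.clamp(coverage + new_attn, max=1.0)                  # mark attended items as discussed
    return new_attn, used_attn, coverage

cov = torch.zeros(4)                       # nothing on the agenda mentioned yet
agenda = torch.randn(4, 8)                 # e.g. embeddings of recipe ingredients to cover
new_attn, used_attn, cov = checklist_attention(torch.randn(8), agenda, cov)
```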

PRESENTATION


Self-Driving Car


Jianxiong Xiao | CEO, AutoX

Jianxiong will address challenges in self-driving and how the direct perception approach simplifies self-driving with affordance indicators.  He argues that images can be used to train a neural network to predict factors important for self-driving, such as speed limits, distances, and drivability.  Jianxiong will also share some of the self-driving capabilities his company has achieved with this approach in the first four months since its inception.
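
For illustration, here is a minimal direct-perception style sketch: a small CNN maps a camera frame to a handful of affordance indicators, which a separate controller would then turn into steering and speed commands.  It assumes PyTorch, and the indicator names are hypothetical rather than AutoX's actual outputs.

```python
# CNN regressing a few affordance indicators from a camera frame.
import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    def __init__(self, n_affordances=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_affordances)   # e.g. lane offset, heading angle, lead-car distance

    def forward(self, image):
        return self.head(self.features(image).flatten(1))

# A downstream rule-based controller would map these predicted affordances to
# steering and speed, rather than learning to drive end to end from pixels.
pred = AffordanceNet()(torch.randn(1, 3, 128, 128))
```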

What Does AI Mean for Mobile?


Yangqing Jia | Deep Learning Research Lead, Facebook

Yangqing will cover why mobile can be a good platform for AI.  He will also demonstrate new augmented reality filters generated by AI, and show how open source frameworks, along with large-scale infrastructure, enable AI on both mobile and cloud.  He will share the advantages of deploying AI on mobile and the challenges that remain.


Panel Discussion


Panelists: Bryan Catanzaro, Nigel Toon, Danny Lange, Deng Li, Yangqing Jia, Chris Nicol

Moderated by Michael Wei and Steve Eliuk

This panel will go over key questions about each panelist's perspective on the future of Artificial Intelligence:

  • Will inference architectures differ from those used in the cloud for training?
  • How do we scale Reinforcement Learning?
  • What’s next and how do we continue forward?

VIDEO:  AI-Summit-2017-Panel-Discussion-Closing-Remarks-Large-540p.mov


CONTACT

Michael Wei (m1.wei@samsung.com) | Director of AI Research at Samsung Research America

Sponsored by Samsung Research America

Organized by Michael Wei, Damon Moon, and Ophelia Yeung