Theme day: Artificial Intelligence
Welcome on November 27th to a full day focusing on Artificial Intelligence (AI). The day explores the ongoing global paradigm shift toward smart, data-driven technologies and applications across all industries. Speakers from leading academia and industry present the latest developments in AI, Machine Learning, and Scalable Platforms, and their implications for next-generation smart data-driven products, services, production, and automation.
The theme day is part of the Software Week: On the bleeding edge of technology, November 26th - 28th 2018. Join one day or several days and take the opportunity to boost your knowledge in all three areas - Multicore, Artificial Intelligence and Cyber Security.
Read more about the presentations
Daniel Gillblad, RISE
Artificial Intelligence and Machine Learning have made enormous progress during the last decade, and AI techniques are already widely used in products and services. AI is today generally recognized as the core of advanced digitalization, and will fundamentally change industry and society. With this as our starting point, we have launched RISE AI. RISE AI brings together AI researchers, companies and authorities to make Sweden a world leader in utilizing AI technologies.
We will describe the RISE AI stack, combining cutting edge research, testbeds, platforms, methods and applications, and the RISE AI ecosystem. We will discuss what is possible to do with AI today, what problems remain to be solved to fully realize the value of AI, and how RISE AI will contribute.
Kalle Åström, Lund University
Two research themes within Computer Vision have been particularly successful.
The first concerns the estimation of 3D structure and camera position from images. Although many of these problems had been studied by mathematicians since the mid-19th century, the peak of research activity in this area fell between 1990 and 2010. The second theme is image understanding. This was already a rich and flourishing area in 2011-2012, when deep learning revolutionized the field. In this talk I will give examples of research within both of these themes.
Elena Fersman, Ericsson
Industries such as automotive, transportation and manufacturing present typical examples of cyber-physical systems, where the physical world is linked with the virtual world aiming at creating a desired global behavior in a collaborative manner. Such systems are becoming increasingly connected, providing new cross-domain business opportunities. However, ubiquitous connectivity and heterogeneity of systems pose new challenges, as vast amounts of data and information from many sources will need to be analyzed, combined and actioned in a safe, ethical and transparent way. This creates complexity that goes beyond the capabilities of human management, and hereby, a need for intelligent automation. We will discuss AI-powered automated management of cyber-physical systems and provide examples ranging from telecom to smart cities.
Philipp Moritz, UC Berkeley
Over the past decade, the bulk synchronous processing (BSP) model has proven highly effective for processing large amounts of data. However, today we are witnessing the emergence of a new class of applications, i.e., AI workloads. These applications exhibit new requirements, such as nested parallelism and highly heterogeneous computations. To support such workloads, we have developed Ray, a distributed system which provides both task-parallel and actor abstractions. Ray is highly scalable, employing an in-memory storage system and a distributed scheduler. In this talk, I will discuss some of our design decisions and our experience with using Ray to implement a variety of applications and libraries.
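The two abstractions Ray provides can be sketched in plain Python. This is a hedged, stdlib-only illustration of the pattern, not Ray's actual API (in Ray, both tasks and actors are declared with the `@ray.remote` decorator and run in a distributed cluster):

```python
from concurrent.futures import ThreadPoolExecutor

# Task abstraction: stateless functions executed in parallel,
# each submission returning a future.
def square(x):
    return x * x

# Actor abstraction: a stateful object whose methods operate
# on the same internal state across calls.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

with ThreadPoolExecutor(max_workers=4) as pool:
    # Task parallelism: independent, stateless computations.
    futures = [pool.submit(square, i) for i in range(5)]
    results = [f.result() for f in futures]

counter = Counter()
for _ in range(3):
    counter.increment()

print(results)        # [0, 1, 4, 9, 16]
print(counter.value)  # 3
```

The key distinction is that tasks carry no state between calls, while an actor serializes method calls against its own state; supporting both in one system is what lets Ray cover heterogeneous AI workloads.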
Martin Schmid, DeepMind
AI research has a long history of using games as a measure of progress towards intelligent machines, but attention has focused primarily on perfect-information games, like checkers, chess, or Go. In the real world, however, one has to deal with imperfect information.
Poker is the quintessential game of imperfect information: you and your opponent each hold information the other does not have (your cards).
DeepStack bridges the gap between AI techniques for games of perfect information—like checkers, chess, and Go—and those for imperfect-information games, like poker. It reasons while it plays, using "intuition" honed through deep learning to reassess its strategy with each decision.
DeepStack is the first theoretically sound application of heuristic search methods—which have been famously successful in games like checkers, chess, and Go—to imperfect information games.
With a study published in Science in March 2017, DeepStack became the first AI capable of beating professional poker players at heads-up no-limit Texas hold'em poker.
Adam Paszke, PyTorch
Python is well known for its ecosystem of mature scientific computing packages. Despite that, the rapidly rising popularity of deep learning has resulted in the creation of a number of new libraries, including PyTorch. Although they were originally meant to provide better support for those domain-specific use cases, one can conclude that they actually have wider applications.
In this talk, I’ll showcase the main ideas behind PyTorch - a library focusing on usability and good integration with other Python packages. I’ll cover some interesting use cases, ranging from ones more specific to machine learning, to those more generally applicable in other scientific computing areas. I’ll also cover some recently added features, and talk a bit about our future roadmap.
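Much of the usability the talk refers to comes from PyTorch's define-by-run design: the computation graph is recorded as ordinary Python executes, and gradients then flow backwards through the recorded operations. A toy scalar autodiff sketch of that idea (an illustrative assumption of how such a system can work, not PyTorch's actual implementation) might look like:

```python
# Toy define-by-run reverse-mode autodiff: the graph is built
# while normal Python code runs, then backward() applies the
# chain rule over the recorded operations.
class Scalar:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents
        self._backward_fn = None

    def __mul__(self, other):
        out = Scalar(self.value * other.value, (self, other))
        def _backward():
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        out._backward_fn = _backward
        return out

    def __add__(self, other):
        out = Scalar(self.value + other.value, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = _backward
        return out

    def backward(self):
        # Topologically order the recorded graph, then propagate.
        order, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for p in node._parents:
                    visit(p)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            if node._backward_fn:
                node._backward_fn()

x = Scalar(3.0)
y = Scalar(4.0)
z = x * y + x          # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Because the graph is rebuilt on every run, ordinary Python control flow (loops, conditionals, debuggers) works unchanged, which is a large part of why this style integrates so well with the rest of the Python ecosystem.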
Salla Franzén, SEB
The expectations on how AI and machine learning will change society and industry vary between individuals, regions and countries. In the midst of the ongoing hype we will go back to basics, and look at some of the history of recommender engines, starting from simple models and ending in new methods of improving them, for both finance and other industries.
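As a taste of the "simple models" end of that history, an item co-occurrence recommender can be sketched in a few lines of Python. The data and scoring rule here are illustrative assumptions, not SEB's models:

```python
from collections import Counter

# Hypothetical interaction data: user -> set of items they liked.
likes = {
    "ann":  {"bonds", "stocks", "funds"},
    "ben":  {"bonds", "funds"},
    "cara": {"stocks", "funds", "gold"},
}

def recommend(user, likes):
    """Score each item the user hasn't seen by how often it appears
    in other profiles, weighted by profile overlap with the user."""
    seen = likes[user]
    scores = Counter()
    for other, items in likes.items():
        if other == user:
            continue
        overlap = len(seen & items)  # similarity = shared items
        for item in items - seen:
            scores[item] += overlap
    return [item for item, _ in scores.most_common()]

print(recommend("ben", likes))  # ['stocks', 'gold']
```

More sophisticated methods (matrix factorization, learned embeddings) refine the same core idea: infer preferences for unseen items from patterns shared across users.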
Martin Nilsson, RISE
A common view today is that AI equals Deep Learning. The current AI boom focuses overwhelmingly on pattern recognition, which can be seen as a form of function approximation. In theory, this is an extremely general paradigm, but in practice, it has fundamental limitations, for instance, spatial complexity. Are even larger memories and faster computers the solution? Or is there some other approach? What else would be required to implement "true" intelligence? Can perhaps biology provide some hints?
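The function-approximation view can be made concrete with the simplest possible learner: a closed-form least-squares line fit. This is an illustrative sketch only, with made-up sample data:

```python
# "Pattern recognition as function approximation" in miniature:
# fit y ≈ a*x + b to noisy samples by closed-form least squares.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]   # roughly y = 2x + 1, plus noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope: covariance of x and y divided by variance of x.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
```

Deep networks replace the line with a vastly more flexible function class, but the paradigm is the same; the open question posed in the talk is what lies beyond it.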
| Time | Session |
| --- | --- |
| 09.00 - 09.15 | Daniel Gillblad, RISE |
| 09.15 - 10.00 | Xiaowei Jiang, Alibaba |
| 10.00 - 10.30 | Kalle Åström, Lund University |
| 10.30 - 11.00 | Pause |
| 11.00 - 11.30 | Elena Fersman, Ericsson |
| 11.30 - 12.00 | Philipp Moritz, UC Berkeley |
| 12.00 - 13.30 | Lunch |
| 13.30 - 14.00 | Martin Schmid, DeepMind |
| 14.00 - 14.30 | Adam Paszke, PyTorch |
| 14.30 - 15.00 | Pause |
| 15.00 - 15.30 | Salla Franzén, SEB |
| 15.30 - 16.00 | Martin Nilsson, RISE |
| 16.00 - 16.15 | Summary |