Presented by


  • Dr David B. Kirk

    NVIDIA Fellow
    NVIDIA

    David B. Kirk is an NVIDIA Fellow and served as NVIDIA’s chief scientist from 1997 to 2009, a role in which he led the development of graphics architecture and technology. Kirk was honored by the California Institute of Technology (Caltech) in 2009 with a Distinguished Alumni Award, its highest honor, for his work in the graphics-technology industry. In 2006, he was elected to the National Academy of Engineering for his role in bringing high-performance graphics to PCs. He also received the SIGGRAPH Computer Graphics Achievement Award in 2002 for his role in bringing high-performance computer graphics systems to the mass market.

    David is the inventor of more than 75 patents and patent applications relating to graphics design and has published many articles on graphics technology and parallel programming. He is also the author of the popular parallel programming textbook “Programming Massively Parallel Processors,” along with co-author Wen-mei Hwu. He holds BS and MS degrees in mechanical engineering from the Massachusetts Institute of Technology, and MS and PhD degrees in computer science from Caltech.



  • Professor Anton van den Hengel

    Director
    Australian Centre for Visual Technologies (ACVT)

    Anton van den Hengel is the Director of the Australian Centre for Visual Technologies (ACVT), the Program Lead for the Analytics and Decision Support Program of the Data 2 Decisions CRC, and a Professor of Computer Science at the University of Adelaide. Prof. van den Hengel has published over 200 papers, been a chief investigator on over $50m in research funding, and leads a group of over 60 researchers working in computer vision and machine learning. He has received a number of awards, including the Pearcey Award for Innovation and the CVPR Best Paper Award in 2010.



  • Dr Mark Sagar

    Academy Award Winner, Director of the Laboratory for Animate Technologies
    Auckland Bioengineering Institute

    Academy Award winner Dr. Mark Sagar is the director of the Laboratory for Animate Technologies at the Auckland Bioengineering Institute and CEO/Co-founder of Soul Machines Ltd.

    Mark is interested in bringing digital characters to life using artificial nervous systems to empower the next generation of human-computer interaction. Mark’s work is pioneering neurobehavioral animation that combines biologically based models of faces and neural systems to create live, naturally intelligent, and highly expressive interactive systems. Mark previously worked as the Special Projects Supervisor at Weta Digital and Sony Pictures Imageworks and developed technology for the characters in blockbusters such as Avatar, King Kong, and Spider-Man 2. He co-directed research and development for Pacific Title/Mirage and Life F/X Technologies, which led the groundbreaking development of realistic digital humans for film and e-commerce applications driven by artificial intelligence. His pioneering work in computer-generated faces was recognized with two consecutive Scientific and Engineering Oscars in 2010 and 2011. Mark holds a Ph.D. in Bioengineering.



  • Dr Le Lu

    Scientist
    Department of Radiology and Imaging Sciences, National Institutes of Health (NIH) Clinical Center, USA

    Le Lu has been a staff scientist in the Department of Radiology and Imaging Sciences, National Institutes of Health (NIH) Clinical Center (CC), Bethesda, Maryland, since 2013. His research focuses on medical image understanding and semantic parsing to fit into "revolutionary" clinical workflows, especially early detection and diagnosis of cancer via large-scale imaging protocols and statistical (deep) learning principles. From Oct. 2006 until Jan. 2013 he worked on core R&D problems in colonic polyp and lung nodule CADx systems and in vessel and bone imaging at Siemens Corporate Research and Siemens Healthcare, where his last post was senior staff scientist. He is the (co-)inventor of 16 US/international patents and 30 inventions, and has authored or coauthored more than 80 peer-reviewed papers (many in tier-one journals and conferences). He has given more than 20 invited lectures or talks at prestigious academic and industrial institutions. He received his Ph.D. in Computer Science from Johns Hopkins University in May 2007, combined with early research training at Microsoft Research. He won the Mentor of the Year award (staff scientist/staff clinician category) at NIH in 2015 and the best summer intern mentor award from NIH-CC in 2013. He served as a program committee member for MICCAI 2015 and 2016 and will serve as an Area Chair for IEEE CVPR 2017. For more details, see www.cs.jhu.edu/~lelu.



  • Professor Tom Drummond

    Leader of Computer Vision Lab
    Monash University

    Prof Tom Drummond has been a principal investigator on several EU Framework projects and is a chief investigator in the ARC Centre of Excellence for Robotic Vision. Tom studied mathematics for his B.A. at the University of Cambridge. In 1989, he emigrated to Australia and worked for CSIRO in Melbourne for four years before moving to Perth for his Ph.D. in computer science at Curtin University. In 1998, he returned to Cambridge as a postdoctoral research associate, was subsequently appointed as a university lecturer, and was later promoted to senior university lecturer. In 2010, he returned to Melbourne and took up a professorship at Monash University.



  • Dr Jose Alvarez

    Computer Vision Researcher
    Data61, CSIRO

    Dr Jose M. Alvarez is a computer vision researcher at Data61 at CSIRO (formerly NICTA) in the Smart Vision Systems group (Australia) working on large-scale dynamic scene understanding and deep learning.

    Dr Alvarez graduated with his Ph.D. from the Autonomous University of Barcelona (UAB) in October 2010. While pursuing his Ph.D. at UAB (2006-2010), Dr Alvarez was funded through research and teaching assistantships and industrial projects. During his Ph.D. program, Dr Alvarez visited the ISLA group at the University of Amsterdam (in 2008 and 2009) and the Group Research Electronics at Volkswagen (in 2010). Dr Alvarez was awarded the best Ph.D. thesis award in 2010 from the Autonomous University of Barcelona. Subsequently, Dr Alvarez worked as a postdoctoral researcher at the Courant Institute of Mathematical Sciences, New York University. Since 2014, Dr Alvarez has served as an associate editor for IEEE Transactions on Intelligent Transportation Systems.



  • Dr Mark Harris

    Chief Technologist for GPU Computing
    NVIDIA

    Mark Harris is Chief Technologist for GPU Computing at NVIDIA, where he works as a developer advocate and helps drive NVIDIA's GPU computing software strategy. His research interests include parallel computing, general-purpose computation on GPUs, physically based simulation, and real-time rendering. Mark founded www.GPGPU.org while he was earning his PhD in computer science from the University of North Carolina at Chapel Hill. Mark brews his own beer and cures his own bacon in Brisbane, Australia, where he lives with his wife and daughter.



  • Alex St John

    CTO, DirectX co-creator
    Nyriad LLC

    Best known for his early work on gaming and for creating the DirectX media platform and the original Direct3D API at Microsoft in the early 1990s, Alex St. John later founded WildTangent Inc., one of the world’s largest online game publishing companies, and became a technology columnist for the leading computer enthusiast publications MaximumPC and CPU Magazine. St. John is also known for creating a technology called MapStream, which was sold to Google and eventually became Google Maps. His many exploits as an evangelist during the fast-growth years at Microsoft are chronicled in various books, including Renegades of the Empire by Michael Drummond, Masters of Doom by David Kushner, and Opening the Xbox by Dean Takahashi. St. John has over 23 patents in streaming media, compression, digital rights management, micro currency, AI, and streaming mapping solutions.



  • Dr Mark Suresh Joshi

    Professor
    University of Melbourne

    Mark Suresh Joshi is a researcher and consultant in mathematical finance, and a Professor at the University of Melbourne. His research focuses on derivatives pricing and interest rate derivatives in particular. He is the author of numerous research articles and seven books. He was an assistant lecturer in the department of pure mathematics and mathematical statistics at Cambridge University and a fellow of Darwin College from 1994 to 1999. Following this, he worked for the Royal Bank of Scotland from 1999 to 2005 as a quantitative analyst at a variety of levels, finishing as the Head of Quantitative Research for Group Risk Management. He joined the Centre for Actuarial Studies at the University of Melbourne in November 2005 as an associate professor, and he is now a full professor.



  • Stephanie Brelaz

    Technical Art Lead
    Opaque Media Group

    Stephanie is the Technical Art Lead for the multi-award-winning studio Opaque Media Group. There she currently heads the efforts to develop visual effects for, and improve performance on, the studio’s two major Virtual Reality projects, Earthlight and Genesis. Stephanie has years of technical art experience in Unreal Engine 4 and its predecessors and was instrumental in developing the Starlight Renderer, the world’s first fully-deferred physically-based WebGL rendering engine.

    She has worked on some of the low-level components of Unreal Engine’s renderer for photorealistic skin and eye shading for a SIGGRAPH demo in 2015.



  • Norman Wang

    Executive Director
    Opaque Media Group

    Norman is an award-winning 3D artist and Executive Director at Opaque Media Group where he leads a number of projects in the area of real-time production and virtual reality. The studio’s latest project, Earthlight, is a VR game being developed in collaboration with space agencies around the world that allows players to immersively experience the journey of becoming an astronaut as well as the wonders and perils of space.

    Norman led a number of R&D efforts on Earthlight, including VR performance optimisations using platform-specific features.



  • Delia Hou

    VR Business Development, ANZ
    NVIDIA

    Delia oversees the VR ecosystem in Australia, New Zealand, and parts of Southeast Asia. Her vision is to grow the field by working with VR developers to become the “VR Heroes” of their country through NVIDIA VRWorks, resources, and networks. She also helps NVIDIA partner with various VR hubs, associations, meet-up groups, and schools to reach all the VR players in each region.

    Her ultimate goal is for NVIDIA to become the VR Heroes’ sidekick, helping them take their VR content to the next level.



  • Dr Gabriel Noaje

    Senior Computational Scientist
    A*STAR Computational Resource Centre (A*CRC) in Singapore

    Dr Gabriel Noaje is a Senior Computational Scientist at the A*STAR Computational Resource Centre (A*CRC) in Singapore. He has more than 7 years of experience in GPU computing and is an NVIDIA Certified Programmer. In his position at A*CRC, Dr Noaje provides user support for running and optimizing GPU applications at the HPC facility, as well as evaluating new hardware accelerators for future acquisitions. He also holds an adjunct senior research engineer position at the Advanced Digital Sciences Center (ADSC) of Illinois at Singapore, where he previously worked on compilers for accelerators.

    Dr Noaje also has more than 10 years of teaching experience and he worked in collaboration with several universities and private companies to deliver CUDA training and general HPC courses.

    Dr Noaje holds a PhD in Computer Sciences from the University of Reims Champagne-Ardenne, France.



  • Dr Wojtek James Goscinski

    Manager
    High Performance Computing Monash eResearch Center, Monash University

    Wojtek James Goscinski is the Coordinator of the Multimodal Australian ScienceS Imaging and Visualisation Environment (MASSIVE), a specialist Australian high performance computing facility for imaging and visualization, and the External Collaborations Manager at the Monash eResearch Centre, a role in which he leads teams to develop effective and creative applications of computing in research. He is the lead applicant on over $3M of competitive, successfully funded research infrastructure projects. He holds a PhD in Computer Science, a Bachelor of Design (Architecture), and a Bachelor of Computer Science.



  • Dr Werner Scholz

    Chief Technology Officer and Head of R&D
    Xenon Technology Group

    Werner is always looking for the best technologies and solutions.

    With a background in nanotechnology and expertise in computational science and high performance computing systems, Werner brings his experience from leading-edge research and development organizations in Europe and the US to Xenon. At Xenon he is responsible for product design, research and development, and the technical team, which delivers Xenon’s new products and solutions.

    Werner has more than 15 years of experience with high performance computing systems, from individual workstations and storage servers to massively parallel HPC clusters and large storage systems. He is also the developer of an open source finite element simulation package, which uses MPI, OpenMP, and GPU parallelization techniques and is used by academic and industrial research organizations around the world.

    Werner has a PhD in physics from the Vienna University of Technology in Austria, where he specialized in computational physics and magnetic materials. He is the author of more than 80 journal articles in the area of computational physics and magnetic nanostructures and co-inventor of 12 patents related to magnetic storage technologies.



  • Jeff Cotter

    Teacher
    Academy of Interactive Entertainment

    Jeff is a teacher in game development at Academy of Interactive Entertainment. Before that he worked for twenty years in 3D visualization and simulation, mostly in aerospace, astronomy and scientific engineering. Some of the 3D applications and simulations he has developed have involved military weapons systems, space debris in orbit around the Earth, and simulation of flood emergencies, city infrastructure, and mass population evacuations.



  • Paul Arden

    CEO
    migenius

    A founding partner and former CTO of Luminova (1999-2007), Paul took up the position of Product Manager for RealityServer and Director of Customer and Application Support at mental images following the acquisition of Luminova’s software development arm by NVIDIA/mental images, managing a globally distributed team working with key customers and OEM/OSDs. With over 15 years of professional experience deploying complex 3D graphics applications to enterprises and consumers, Paul also holds a Bachelor of Science degree in Mathematical and Information Sciences and has lectured in Advanced Computer Graphics at La Trobe University.



  • Mark Wilcox

    Senior Project Manager
    NYRIAD

    Mark is a New Zealand-based expert on blockchain technology. Mark has dedicated his career to applying the ideas behind Bitcoin to a wide variety of applications, including Agile Project Management and Augmented Reality. Mark works as a Senior Project Manager at NYRIAD, where he leads The Ambigraph Project, a next-generation framework for secure Big Data Analytics and High Performance Computing.



  • Gaurav Mitra

    Staff Scientist
    National Computational Infrastructure (NCI)

    As a Staff Scientist at the National Computational Infrastructure (NCI), Gaurav currently supports users of the Raijin Supercomputer and conducts research in emerging High Performance Computing (HPC) platforms such as Intel's Knights Landing. Prior to that, Gaurav was a graduate student at the Australian National University pursuing a PhD in Energy Efficient HPC. Gaurav has experience working with accelerators such as GPUs, DSPs and many-core processors and has previously interned with Texas Instruments to develop the OpenMP Accelerator Model runtime for the TI Keystone II System-on-chip. Gaurav holds a double degree in Software Engineering and Science also from the ANU.



  • Trent Clews-de Castella

    Co-founder & CEO
    Phoria

    Trent previously owned and ran a 3D tech consultancy which connected consumers and businesses with new and emerging immersive technologies. His unique position and understanding afforded him the insight and network to build Scann3d, a creative 3D visualisation company that gives you the ability to virtually experience any interior space as if you were physically there. Two years on, Phoria has launched several products internationally, each of which enables a new paradigm of interactive journeys that save time and money and bring much-needed transparency back into online experiences. Phoria is an immersive media start-up committed to weaving multiple human senses into its experiences, building on our true sense of spatial awareness.



  • Christopher Fluke

    Researcher
    The Centre for Astrophysics & Supercomputing, Swinburne University of Technology

    Associate Professor Christopher Fluke is a researcher with the Centre for Astrophysics & Supercomputing, Swinburne University of Technology. Within the Centre, Chris leads the Scientific Computing & Visualisation research theme. As an early adopter of NVIDIA GPUs in astronomy, Chris has seen the impact that massively parallel computation can have across a range of application areas. Chris is an active science communicator, who speaks regularly to school groups and the general public about astronomy.



  • Dr John McGhee

    Director
    3D Visualisation Aesthetics Lab, UNSW Art & Design

    Dr John McGhee is a 3D Computer Artist, Senior Lecturer and the Director of the 3D Visualisation Aesthetics Lab at UNSW Art & Design. John’s academic research work explores art and design-led modes of visualising complex scientific and biomedical data using 3D computer animation techniques, most recently on Virtual Reality (VR) headsets. His work includes the deployment of VR in stroke rehabilitation, clinical MRI 3D visualisation for patient education and the application of VR in bio-nano cellular data visualisation. This has culminated in John being recognised as one of UNSW Australia’s 21 ‘Rising Stars’.



  • Ryan Olson

    Solutions Architect
    NVIDIA Corporation

    Ryan Olson is a Solutions Architect in the Worldwide Field Organization at NVIDIA. His primary responsibilities involve supporting deep learning and high performance computing applications. Ryan is particularly interested in scalable software design that leverages the unique capabilities of the underlying hardware. Prior to NVIDIA, Ryan spent 8 years at Cray, where he helped architect novel solutions that enabled applications to run at scale on some of the world’s largest supercomputers, including Oak Ridge National Lab’s Jaguar and Titan machines as well as the National Science Foundation’s Blue Waters machine at NCSA. Ryan holds a Ph.D. in Physical Chemistry from Iowa State University, where he was a member of the Gordon Group working on the popular GAMESS chemistry package.

    Ryan also spent a semester during his graduate work visiting the Australian National University, working with Alistair Rendell on novel hybrid communication layers for one-sided frameworks.



  • Jake Carroll

    Senior ICT Manager
    Queensland Brain Institute, University of Queensland

    Jake Carroll is the Senior ICT Manager, Research, for one of the largest neuroscientific research organisations in the world: the Queensland Brain Institute at the University of Queensland. He has spent the last decade immersed in the engineering and strategy of high performance computing in various roles across the higher education sector, with a hard focus on storage and networking at the ‘bleeding edge’ of the Australian scientific research sector. His intent has been, and continues to be, achieving outcomes for scientific research through judicious and well-placed use of next-generation technologies. Jake holds positions on local and international high performance storage and compute design committees, filesystems engineering groups and hardware design think-tanks. Jake holds an honours degree in computer science, involving the measurement of human-computer interaction on multi-panel and multi-modal display surfaces, and is working towards his PhD.



  • Dr Hon Weng Chong

    CTO & Founder
    CliniCloud Inc

    Dr Hon Weng Chong is the CTO and co-founder of CliniCloud, a Melbourne health technology startup that designs and develops connected medical devices. Aside from his role at CliniCloud, Hon is a medical doctor and software developer. Hon was introduced to machine learning and neural networks during his research fellowship at the Johns Hopkins Division of Health Sciences Informatics. Since then he has been a firm believer in the transformative power of data and in how we can improve the safety and precision of medicine by building intelligent systems.



  • Dr Ajay Kevat

    Paediatric Registrar
    Monash Children’s Hospital

    Dr Ajay Kevat is a children’s doctor trained in Melbourne, Australia. He believes technological solutions are the key to solving diagnostic and management conundrums with greater precision, and will enable him to spend more time with patients and their families, supporting them through the challenges they face. With the Clinicloud team, he has forged a partnership with the Royal Women’s Hospital in order to develop technology to guide neonatal resuscitation. He hopes that his research into paediatric breath sounds captured with digital stethoscopes at the brand new Monash Children’s and Royal Children’s Hospitals will open a new frontier in how to better help children with lung conditions such as asthma.



  • Andy Kitchen

    Data Scientist
    Silverpond

    Andy Kitchen is a crazy programmer with a big heart. When he was a kid, the first program he ever wrote was an “AI.” At the time, he thought this was the coolest thing ever. Later, he realized it was a chatbot. He is now a researcher and consultant in machine learning, specializing in neural networks. He is currently puzzling over how to teach computers to think about thinking and other metaphysical machinations.



  • Allison Gray

    Solutions Architect, Federal Team
    NVIDIA

    Allison Gray is a Solutions Architect in the Federal team at NVIDIA. She supports customers using GPUs for deep learning and geospatial information systems.



  • Miles Macklin

    Senior Research Engineer
    NVIDIA

    A deep dive into the various aspects of the development of NVIDIA’s VR Funhouse. We’ll explore the specifics behind the integration of real-time fluid and fire in Unreal Engine 4, the importance of haptic feedback, and the challenges of making high-fidelity experiences in VR. This talk will cover both engineering and art-related issues that were overcome during the development cycle. Finally, we’ll talk about how developers can leverage VR Funhouse’s source code, available from GitHub, to create their own immersive VR experiences.



  • Stewart Smith

    OPAL Architect
    Linux Technology Center, IBM

    Stewart is based in Melbourne and works for IBM in the Linux Technology Center as OPAL Architect. OPAL is the OpenPOWER Abstraction Layer, boot and runtime firmware for OpenPOWER systems.



  • Steve Tolnai

    Chief Technologist
    HPC Asia Pacific and Japan, HPE

    Steve Tolnai is Chief Technologist in the Hewlett Packard Enterprise Servers business across Asia Pacific & Japan. In this position he is responsible for all aspects of High Performance Computing technical strategy within Asia Pacific & Japan.



  • Ilija Ilievski

    Ph.D. Student
    National University of Singapore (NUS)

    Ilija is a Ph.D. student at NUS, doing interdisciplinary research in the intersection of vision and language. He believes question answering over multimodal data is the next frontier of deep learning. Thus, his research focuses on Visual Question Answering.

    As a side project, he created deeplearningtutorials.com, a place to share his experience developing deep learning methods for real-world problems, in the hope of clearing up the "dark magic" surrounding the development and application of deep learning models to novel problems. He previously obtained an M.Sc. in Software Engineering for Machine Learning, developing an intelligent tool for urban data modeling and simulation.



  • Dr Muhammad Atif

    Manager of the HPC and Cloud systems
    National Computational Infrastructure (NCI)

    Dr Muhammad Atif is Manager of the HPC and Cloud systems at the National Computational Infrastructure (NCI), based at the Australian National University in Canberra, Australia. Dr Atif’s team is responsible for all aspects of system administration and management of Australia’s largest HPC system (Raijin) and Australia’s fastest research cloud (Tenjin). His team oversees the efficient operation of these services through careful monitoring and profiling of applications.

    Dr Atif earned his PhD in July 2011 from The Australian National University. The area of research for his PhD was HPC in virtualized environments.



  • Dr Amanda Caples

    Lead Scientist and Deputy Secretary
    Sector Development Division in the Victorian Department of Economic Development, Jobs, Transport and Resources

    Dr Amanda Caples is the Lead Scientist and Deputy Secretary of the Sector Development Division in the Victorian Department of Economic Development, Jobs, Transport and Resources. In this role she is responsible for the development of priority industry sector strategies.

    Prior to this appointment Amanda has held various roles in the public sector including as the Victorian Government's inaugural Director of Biotechnology. Amanda brings commercial skills to industry and innovation policy having had a successful career in the private sector in executive positions with international and local pharmaceutical companies, specialising in product development, technology transfer and business development (licensing and start-up ventures).

    Amanda has a Bachelor of Science (Honours) and PhD from the University of Melbourne and is a member of the Australian Institute of Company Directors. Amanda is a member of the Ivanhoe Grammar School Board of Governors and the Advisory Board for the Australian Research Council Centre of Excellence for Integrative Brain Function.


Agenda

The first day includes a mixture of keynotes and speakers across a range of topics, from deep learning and AI to virtual reality and high-performance computing. The second day focuses on hands-on lab training. You will receive a certificate of attendance from NVIDIA.

Browse the at-a-glance agenda below and check back often for updates as we confirm more speakers.

Day 1

Day 2

Location: Room 105 & 106

08:00 - 09:00

Registration & Coffee
Exhibits / VR Village

09:00 - 09:10

Welcome Speech
Dr Amanda Caples

09:10 - 10:00

10:40 - 11:10

Morning Break & Exhibits / VR Village

  • Deep Learning Track

    Location

    Room 105 & 106

    11:10 - 12:00

    Deep Neural Networks in Medical Imaging and Radiology: Preventative and Precision Medicine Perspectives
    Dr Le Lu, Scientist, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, USA

    Employing deep learning (DL), especially deep neural networks, for high-performance radiological and medical image computing is the main focus of this talk. We'll present the motivation, technical details and quantitative results of our recent work at NIH for three core problems: 1) Improving Computer-aided Detection (CAD) using Convolutional Neural Networks and Decompositional Image Representations; 2) Robust Bottom-up Multi-level Deep Convolutional Networks for Automated Organ Segmentation; 3) Text/Image Deep Mining on a Large-Scale Radiology Image Database for Automated Image Interpretation. We validate some very promising observations: DL both significantly improves upon traditional CAD tasks (1) and enables exciting new research directions (2, 3). This presentation is based on 11 recent papers published in MICCAI/CVPR/TMI/JMLR and three filed patents. We expect these methods to have positive impacts in both preventative and precision medicine.
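The abstract names convolutional networks as the workhorse for detection. The NIH pipeline itself is not described here, but as a schematic illustration of the core operation a CNN is built from, a single valid-mode 2-D convolution with a hand-fixed filter can be sketched in NumPy (all names hypothetical):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the
    image and take a weighted sum at each position. A CNN stacks many
    such filters, learning the kernel weights from labelled scans."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A fixed vertical-edge filter: it responds wherever intensity changes
# left-to-right, the kind of low-level feature a first CNN layer learns.
edge = np.array([[1.0, -1.0]])
img = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0, 1.0]])
print(conv2d(img, edge))
```

In a trained detector the filter weights are learned rather than fixed, and the loop is replaced by batched tensor operations on the GPU, but the sliding-window weighted sum is the same.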

    12:00 - 12:30

    Efficient Deep Networks for Real-Time Classification in Constrained Platforms
    Dr Jose Alvarez, Computer Vision Researcher, Data61 at CSIRO


    12:30 - 13:30

    Lunch and Exhibits / VR Village

    13:30 - 14:00

    We can remember it for you wholesale: using GPUs to accelerate robotic memory access
    Professor Tom Drummond, Leader of Computer Vision Lab, Monash University

    Artificial intelligence systems can benefit from explicit memory that is recalled when a similar set of circumstances applies. This creates a nearest-neighbour search problem over large datasets (millions, billions or more of points) in high-dimensional spaces (hundreds, thousands or more of dimensions). In this work we present our state-of-the-art solution, which provides a very rapid method of locating, with high probability, the nearest neighbours in a database to a query point, and we show how this can be accelerated using CUDA.
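The talk's actual data structure is not given in the abstract. As a rough illustration of why this workload suits GPUs, a brute-force baseline can be sketched in NumPy (all names hypothetical): expanding ||d - q||^2 = ||d||^2 - 2 d.q + ||q||^2 reduces the whole distance computation to one large matrix multiply, exactly the operation GPUs excel at.

```python
import numpy as np

def nearest_neighbours(database, queries, k=1):
    """Brute-force k-NN: squared Euclidean distances via the
    ||d||^2 - 2 d.q + ||q||^2 expansion, so the dominant cost is a
    single (M x D) @ (D x N) matrix multiply."""
    d_sq = (database ** 2).sum(axis=1)                       # (N,)
    q_sq = (queries ** 2).sum(axis=1)                        # (M,)
    dist = q_sq[:, None] - 2.0 * queries @ database.T + d_sq[None, :]
    # argpartition: unordered indices of the k smallest distances per row
    return np.argpartition(dist, k - 1, axis=1)[:, :k]

rng = np.random.default_rng(0)
db = rng.standard_normal((10000, 128)).astype(np.float32)    # database points
q = db[:5] + 0.001 * rng.standard_normal((5, 128)).astype(np.float32)
print(nearest_neighbours(db, q, k=1).ravel())                # → [0 1 2 3 4]
```

Swapping NumPy for a GPU array library (e.g. CuPy) runs the same matrix multiply on the device; approximate methods such as the one the talk describes additionally trade exactness for sub-linear query time, returning the true neighbour only with high probability.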

    14:00 - 14:30

    Deep Learning and Accelerated Analytics: Faster, better results, unique insight
    Dr Werner Scholz, CTO and Head of R&D, Xenon Technology Group

    Customers are looking to extend the benefits of big data with the power of deep learning and accelerate the insights they can get from their data. The NVIDIA® DGX-1™, the platform of AI pioneers, integrates deep learning and accelerated analytics in a single hardware and software system. This session will cover lessons and successes from real-world customer deployments of deep learning and accelerated analytics.

    14:30 - 15:00

    Deep Learning Solutions for Visual World Understanding
    Jake Carroll, Senior ICT Manager, Queensland Brain Institute, University of Queensland

    In this session Jake will take the audience on a journey through compute ‘generations’ and the challenges that have been met at each turn. He’ll dive deep into some of the research being undertaken at scale within QBI, demonstrating some of the differences between acceleration technologies, and where things have worked and where they have not. It is a presentation for high-level decision makers and coal-face engineers alike. The talk will detail the use of deconvolution technology in high-throughput microscopy, as well as the use of large GPU deployments to accelerate fully convolutional networks.

    15:00 - 15:30

    HPE GPU computing, combining the best of Servers and GPUs Steve Tolnai Chief Technologist HPC Asia Pacific and Japan, HPE

    Affordable GPU computing has become a much more realistic and attainable goal over the last few years. Seamless integration of GPU computing with HPE ProLiant servers for high-performance computing and visualization, virtual desktop deployments and deep learning delivers all of the benefits of GPU computing while enabling maximum reliability and tight integration with system monitoring and management tools such as the HPE Insight Cluster Management Utility.

    This session will cover the broad offering of GPU computing from Hewlett Packard Enterprise for HPC, eVDI and Deep Learning, including customer examples.

    15:30 - 16:00

    Afternoon Break and Exhibits / VR Village

    16:00 - 16:30

    Deep Learning Solutions for Visual World Understanding Ilija Ilievski PhD, Learning and Vision Research Group National University of Singapore

    Deep learning has revolutionized computer vision research. In this talk, I will first briefly introduce the deep learning approaches developed by my group for solving several fundamental computer vision research problems, as well as their applications in practice, including face analytics, human behavior understanding, urban scene understanding and generic object recognition. I will then concentrate on presenting solutions to several practical issues in these applications that let computers understand the visual world more intelligently, such as how to achieve robustness to noisy signals, how to integrate contextual information effectively, and how to build a network that learns continually. Towards solving these issues, I will introduce three new types of deep neural network model in detail: recurrent attentive neural networks, multi-path feedback networks, and self-learning networks. Finally, I will conclude by introducing our work on the Visual Question Answering problem, a novel problem that lies at the intersection of computer vision and natural language processing.

    16:30 - 17:00

    DeepBreath: Deep learning for respiratory medicine Dr Hon Weng Chong CTO & Founder CliniCloud Inc Dr Ajay Kevat Paediatric Registrar Monash Children’s Hospital Andy Kitchen Data Scientist Silverpond

    This session will focus on how the intersection of medicine and deep learning are enabling new powerful ways of looking at diseases using data, and how this creates opportunities for health-technology startups to provide never-before-seen services.

    CliniCloud is a health technology startup based in Melbourne that designs and develops connected medical devices for the home. Their kit contains an easy-to-use bluetooth thermometer and digital stethoscope. Paired with a smartphone, users can upload health data to the cloud and access remote diagnostics, wherever they are.

    Silverpond are code-slingers-for-hire, knitting together deep learning, data science and functional programming. Always ready to jump into action and pick up the weird, the innovative and the challenging.

    Combining data obtained from rigorous clinical studies with deep learning, CliniCloud, with the assistance of Silverpond, is exploring the creation of algorithms that can automatically assess, triage and possibly even diagnose diseases.

    17:00 - 17:30

    DIGITS 4: Deep Learning GPU Training System Allison Gray Solutions Architect, Federal team NVIDIA

    In this talk, Allison will present NVIDIA DIGITS 4. NVIDIA DIGITS 4 introduces a new object detection workflow, enabling data scientists to train deep neural networks to find faces, pedestrians, traffic signs, vehicles and other objects in a sea of images. This workflow enables advanced deep learning solutions — such as tracking objects from satellite imagery, security and surveillance, advanced driver assistance systems and medical diagnostic screening.

    When training a deep neural network, researchers must repeatedly tune various parameters to get high accuracy out of a trained model. DIGITS 4 can automatically train neural networks across a range of tuning parameters, significantly reducing the time required to arrive at the most accurate solution.
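
The sweep idea can be illustrated in miniature: train the same model under several candidate hyperparameter values and keep the one with the best final metric. A toy sketch, where the "model" is just gradient descent on a quadratic (all values illustrative, not part of DIGITS):

```python
# Hypothetical miniature of an automated hyperparameter sweep: run the same
# training loop at several learning rates and keep the best-performing one.
def train(lr, steps=50):
    """Gradient descent on f(w) = (w - 3)^2; returns the final loss."""
    w = 0.0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)   # d/dw (w - 3)^2
        w -= lr * grad
    return (w - 3.0) ** 2

candidates = [0.001, 0.01, 0.1, 0.5, 1.1]   # 1.1 diverges, 0.001 underfits
best_lr = min(candidates, key=train)        # pick the lowest final loss
print(f"best learning rate: {best_lr}")      # → best learning rate: 0.5
```

DIGITS applies the same select-by-validation-metric loop to full network training runs rather than a toy objective.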

  • Supercomputing & HPC Track

    Location

    Room 104

    11:10 - 12:00

    Inside Pascal: New Features in NVIDIA's Latest Computing Architecture Dr Mark Harris Chief Technologist for GPU Computing NVIDIA
    The revolutionary NVIDIA® Pascal™ architecture is purpose-built to be the engine of computers that learn, see, and simulate our world—a world with an infinite appetite for computing. Pascal incorporates ground-breaking technologies to deliver the highest absolute performance for HPC, technical computing, deep learning, and many computationally intensive datacenter workloads. In this talk you’ll see how Pascal GPUs provide extreme performance and scaling using the new NVLink high-speed GPU interconnect, HBM2 stacked memory for massive bandwidth, and massive computational throughput for artificial intelligence with new 16-bit floating point instructions. You’ll also learn how Unified Memory in CUDA benefits from Pascal’s new Page Migration Engine to enable easier porting of code to the GPU, as well as oversubscription of GPU memory.

    12:00 - 12:30

    Distributed Computing with Blockchains Alex St John CTO, DirectX co-creator Nyriad LLC
    Most modern distributed graph processing architectures like Apache Flink, Hadoop, Storm and Spark are written in high-level languages, sealed away from bare-metal computing performance by many layers of sandboxes, VMs and user-space protection layers. Nyriad is launching a new open-source architecture for HPC graph computing that cuts through all of these layers to provide a new paradigm for HPC big-data processing called an Ambigraph. Inspired by Nyriad’s work on the Square Kilometre Array telescope, which will need to process 80 Tb/s of astronomical data in real time, Ambigraphs are a radical departure from existing graph processing solutions. Ambigraphs rely on highly vectorized modern CPUs, FPGAs and GPUs for their performance and utilize blockchain technology to produce computed solutions that are fully auditable, reproducible and error-free.

    12:30 - 13:30

    Lunch and Exhibits / VR Village

    13:30 - 14:00

    The Microscope for 21st Century Discovery Dr Wojtek James Goscinski Manager of HPC Monash eResearch Center Monash University
    World-class environments for research require the orchestration of specialised instruments, data storage and processing facilities, and advanced data visualisation environments. The Clayton Innovation Precinct is now home to a world-unique trifecta to support this vision: (1) World-class scientific instruments located at Monash University, CSIRO, Australian Synchrotron and affiliated medical research institutes; (2) Unique data processing capabilities of the MASSIVE HPC facility; and (3) A world-class immersive visualisation environment for data analysis and collaboration (the CAVE2). The way in which scientists apply these three capabilities in concert will be an archetype of the way research will be performed in the 21st Century.

    14:00 - 14:30

    High Density GPU Computing at NCI Dr Muhammad Atif Manager, HPC and Cloud Systems NCI

    NCI, as Australia’s national research computing service, provides world-class, high-end services to Australia’s researchers, the primary objectives of which are to raise the ambition, impact, and outcomes of Australian research through access to advanced, computational and data-intensive methods, support, and high-performance infrastructure.

    This talk presents the high-density GPU computing solution at NCI, powered by NVIDIA and Dell, that is enabling scientists to drive new insights and discoveries faster than ever. This solution is a step in Australia’s push towards exascale computing.

    14:30 - 15:00

    HPC Landscape in Singapore Dr Gabriel Noaje Senior Computational Scientist A*STAR Computational Resource Centre
    This presentation will provide a brief introduction to the new National Supercomputing Centre of Singapore (NSCC) and the A*STAR Computational Resource Centre (A*CRC), with the aim of providing the audience with a better understanding of Singapore's HPC landscape. Several case studies of GPU applications running at NSCC as well as A*CRC will be presented.

    15:00 - 15:30

    Breaking Bottlenecks with IBM POWER8 and NVLink Stewart Smith OPAL Architect Linux Technology Center, IBM

    System end-users, developers, and administrators need advancements in GPU performance, programmability, and in the ability to feed data to GPUs to unlock the next wave of accelerated computing. The differentiated tight binding of POWER8 with NVIDIA Tesla P100 GPUs through NVLink Technology addresses these requirements, resolves the PCI-E bottleneck and unlocks new potential for GPUs across industries.

    This session will provide you with a technical introduction into the technologies and the investments IBM is making in high performance computing with NVIDIA.

    15:30 - 16:00

    Afternoon Break and Exhibits / VR Village

    16:00 - 16:30

    Exploring energy-efficient work partitioning across processing elements on the NVIDIA Tegra X1 and K1 systems Gaurav Mitra Staff Scientist, National Computational Infrastructure The Australian National University

    We demonstrate an energy usage model designed to predict whether it is energy-optimal to partition work across different processing elements in a heterogeneous system and validate this model with results across the low-power NVIDIA Tegra X1 and K1 boards, compared with results across conventional HPC systems with Intel Haswell/Sandy-Bridge CPUs and NVIDIA K80/K20 GPUs.

    Hybrid CPU-GPU Matrix multiplication (SGEMM & DGEMM) is implemented and used for our experiments. An adaptive work partitioning algorithm is used to determine the split that might achieve best performance. While it is evident that using both the CPU and GPU simultaneously results in higher absolute performance than using either device individually, whether it results in higher energy efficiency is an open question. Using our model, we attempt to answer this question and indeed we demonstrate that performance optimal work partitions are not always energy optimal. We then use dynamic voltage and frequency scaling (DVFS) to alter both CPU and GPU frequencies to determine whether higher energy efficiency is achievable at different frequency configurations. Certain trade-offs in achieving optimal energy and performance are demonstrated using DVFS results.

    Our results show that the Tegra X1 system has extremely high energy efficiency in single precision, exceeding that of the conventional HPC systems. Quantitative comparisons of energy and performance against Haswell/Sandy-Bridge + K80/K20 systems are also presented.
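
The core finding above — that the performance-optimal work split need not be the energy-optimal one — can be reproduced with a toy cost model. All rates and power figures below are hypothetical, not the paper's measurements; the point is only the shape of the trade-off:

```python
# Toy model of partitioning work between a CPU and GPU running concurrently.
# Each device draws its active power only while busy; the system also draws
# a base power for the whole makespan.
def cost(alpha, work=1000.0,
         cpu_rate=20.0, gpu_rate=100.0,    # hypothetical Gflop/s
         cpu_power=40.0, gpu_power=30.0,   # hypothetical active watts
         base_power=10.0):                 # hypothetical system idle draw
    """alpha = fraction of the work assigned to the GPU."""
    t_cpu = (1 - alpha) * work / cpu_rate
    t_gpu = alpha * work / gpu_rate
    makespan = max(t_cpu, t_gpu)           # devices run in parallel
    energy = (base_power * makespan
              + cpu_power * t_cpu + gpu_power * t_gpu)
    return makespan, energy

alphas = [i / 1000 for i in range(1001)]
best_time = min(alphas, key=lambda a: cost(a)[0])
best_energy = min(alphas, key=lambda a: cost(a)[1])
print(f"performance-optimal GPU share: {best_time:.3f}")
print(f"energy-optimal GPU share:      {best_energy:.3f}")
```

With these numbers the fastest split shares work with the (inefficient) CPU, while the lowest-energy split sends everything to the GPU and accepts a longer runtime, mirroring the trade-off the session explores with DVFS on real hardware.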

    16:30 - 17:00

    Astronomy Accelerated Christopher Fluke Associate Professor, Centre for Astrophysics & Supercomputing Swinburne University of Technology

    Modern astronomy is a petascale enterprise. High performance computing applications in astronomy are enabling complex simulations with many billions of particles, while the forthcoming generation of telescopes will collect data at rates in excess of terabytes per day. The immensity of the data demands new approaches and techniques to ensure that standard analysis tasks can be accomplished at all, let alone in reasonable time. Indeed, many of the basic techniques used to analyse, interpret, and explore data will be pushed beyond their breaking points. New approaches are required now, including harnessing novel technologies such as graphics processing units for massively parallel computation, in order to prepare for this large data paradigm shift. I will discuss the crucial role that NVIDIA GPUs and the CUDA compute platform have played in achieving breakthrough results including interactive visualisation and analysis of terabyte-scale data cubes; accelerated optimisation and model-fitting for the rotational properties of galaxies from large-scale surveys; and the real-time detection of a new class of astronomical objects called Fast Radio Bursts.

    17:00 - 17:30

    Kooderive: using GPUs and the Libor Market Model with least-squares Monte Carlo to price cancellable swaps Dr Mark Suresh Joshi Professor University of Melbourne

    We discuss how the Monte Carlo pricing of early-exercisable financial derivatives can be carried out using a GPU. In particular, we discuss the challenges of adapting the least-squares regression algorithm to the GPU. We demonstrate speed-ups of more than 100x over single-threaded CPU code.
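
The least-squares regression the abstract refers to is the heart of the Longstaff-Schwartz algorithm: stepping backwards through the exercise dates, the continuation value is estimated by regressing realised discounted cashflows on functions of the current asset price. A minimal single-threaded NumPy sketch for a Bermudan put under geometric Brownian motion (parameters illustrative; this is the serial algorithm the talk adapts to the GPU, not Kooderive itself):

```python
import numpy as np

def lsmc_put(s0=100.0, strike=100.0, r=0.05, sigma=0.2, T=1.0,
             steps=50, paths=100_000, seed=1):
    """Longstaff-Schwartz price of a Bermudan put under GBM."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    # simulate GBM paths at the exercise dates
    z = rng.standard_normal((paths, steps))
    s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    # cashflow if held to maturity
    cash = np.maximum(strike - s[:, -1], 0.0)
    # backward induction with least-squares regression
    for t in range(steps - 2, -1, -1):
        cash *= disc                       # discount cashflows to time t
        st = s[:, t]
        itm = strike - st > 0              # regress on in-the-money paths only
        if not itm.any():
            continue
        coeffs = np.polyfit(st[itm], cash[itm], 3)   # cubic basis
        continuation = np.polyval(coeffs, st[itm])
        exercise = strike - st[itm]
        ex_now = exercise > continuation   # exercise where immediate payoff wins
        cash[np.where(itm)[0][ex_now]] = exercise[ex_now]
    return disc * np.mean(cash)            # discount the first step to time 0

price = lsmc_put()
print(f"Bermudan put price: {price:.2f}")
```

On the GPU, the path simulation is embarrassingly parallel, but the per-date regression requires reductions across all paths, which is the adaptation challenge the session discusses.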

  • ProViz & VR Track

    Location

    Room 103

    11:10 - 12:00

    Autonomous Animation of Virtual Human Faces Dr Mark Sagar Academy Award Winner, Director of the Laboratory for Animate Technologies University of Auckland

    Our goal is to make human-computer interaction more human by creating artificial nervous systems which drive virtual characters in real-time. The talk will describe a neurobehavioural modeling and visual computing framework for the integration of realistic interactive computer graphics with neural systems modelling, allowing real-time autonomous facial animation and interactive visualization of the underlying neural network models. The system has been designed to integrate realistic computer graphics and interconnect a wide range of cognitive science and computational neuroscience models to bring autonomous artificial humans and characters to life.

    12:00 - 12:30

    Virtual Reality - an exploration of invisible worlds. Dr John McGhee Director of the 3D Visualisation Aesthetics Lab UNSW

    This session explores how art and design-led modes of 3D visualisation can contribute to complex scientific and biomedical data communication. In particular the presentation will unpack how recent developments in Virtual Reality (VR) headset technology will change the way we interact with data.

    12:30 - 13:30

    Lunch and Exhibits / VR Village

    13:30 - 14:00

    Earthlight – Creating One of World’s Most Spectacular VR Experiences in Unreal Engine and VRWorks Norman Wang Executive Director Opaque Media Group Stephanie Brelaz Technical Art Lead Opaque Media Group

    Keeping VR applications performant is one of the most important factors in ensuring a pleasant VR experience. The stringent performance envelope around VR applications has steered developers down a path of stylisation and non-photoreal art styles. With Earthlight, our commitment was to give players the most authentic experience of what it’s like to be an astronaut, and the visual realism of the experience is one of its most vital components. In this session, we will discuss the practices and pipelines we use to profile and optimise Earthlight to ensure that we strike the best balance between performance and visual realism. Further, we will discuss the outcome of our experiments with NVIDIA VRWorks and how it is allowing us to push visual fidelity further than ever before.

    14:00 - 14:30

    VR-Weird Simulation Jeff Cotter Teacher Academy of Interactive Entertainment

    Now that Virtual Reality has 'arrived', a lot of attention is being focused on the application of VR to gaming. But perhaps even more interesting is the impact VR will have on the wider non-gaming industry, in particular, on commercial simulation and training. For gamers, VR offers an exciting extension to what they are already used to: being absorbed into an alternate reality. But commercial simulation is typically aimed at the broader populace rather than gamers, and for these users (who often have never experienced VR), Virtual Reality often comes as something of a shock. The sense of presence and the feeling of being transported to another world is palpable and compelling, almost to the point of being startling, or even downright 'weird' (usually in a nice way). The 'startle' effect can greatly contribute to the training value of a VR simulation or training experience.

    In this session, we'll take a look at two 'weird' simulations, each of which goes beyond the world of games to create highly realistic simulations of the real world: a hang gliding simulator (accurate enough to train real hang glider pilots) and a trip back through time to stroll through the central business district of Ancient Rome (accurate to the latest archaeological findings).

    14:30 - 15:00

    From Architecture to Apparel, How Physically Based Rendering on GPUs in the Cloud Powers Design and Commerce Paul Arden CEO migenius

    What do architecture, interior design, fashion, furniture, jewelry, automotive design and urban planning all have in common? Using a combination of NVIDIA GPU hardware, NVIDIA Iray rendering software and the concepts of cloud computing, these industries are creating better designs, selling more products and improving quality of life for residents. In this session we will show specific use cases and demonstrate the technology behind these advancements.

    15:00 - 15:30

    Turning “Hyperloop” transportation concept into reality Zachary McClelland Project Director VicHyper

    In this talk you will learn how a team of university students with the help of Lenovo workstations are turning Elon Musk’s revolutionary “Hyperloop” transportation concept into reality.

    15:30 - 16:00

    Afternoon Break and Exhibits / VR Village

    16:00 - 16:25

    The rise of VR and the future of immersive media Trent Clews-de Castella CEO, Co-founder Phoria

    The world as we know it is changing; physical environments are manifesting into the virtual; human perceptions are being augmented through technology and multiple forms of immersive media are converging into new mediums and paradigms. Phoria is an immersive media start-up that is pioneering the next generation of virtual experiences. By leveraging NVIDIA VRWorks, game engine technology and in-house software development, Phoria designs interactive experiences that extend beyond how something looks and towards how it feels. Join Trent as he explores the evolution of immersive media. Learn how innovative tools like 3D scanning, computer vision algorithms and insanely powerful rendering hardware have helped his team build a startup that not only transforms the way people can perceive physical space online but also has the ability to transform lives through engaging, immersive and awe-inspiring virtual experiences.

    16:25 - 16:40

    Fred Liao, Product Manager, MSI

    16:40 - 17:10

    NVIDIA's VRWorks SDK: Accelerating and Enhancing VR Experiences Delia Hou VR Business Development NVIDIA

    NVIDIA has created a Virtual Reality SDK, called VRWorks, for VR software and hardware developers. VRWorks improves performance, reduces latency, improves compatibility, enables immersive environments, and accelerates 360 video broadcast. Available as a free download from NVIDIA’s developer site, the VRWorks SDK is being used by VR companies across the globe to accelerate and enhance VR applications.

    17:10 - 17:35

    VR Funhouse: A Post Mortem Miles Macklin Senior Research Engineer NVIDIA

    A deep-dive look at the various aspects of the development of NVIDIA’s VR Funhouse. We’ll explore specifics behind the integration of real-time fluid and fire in Unreal Engine 4, the importance of haptic feedback, and the challenges of making high-fidelity experiences in VR. This talk will cover both engineering and art-related issues that were overcome during the development cycle. Finally, we’ll talk about how developers can leverage VR Funhouse’s source code, available on GitHub, to create their own immersive VR experiences.

17:30 - 18:30

Cocktail Reception & VR Village

TIME

Deep Learning Track

Deep Learning Track

08:00 - 09:00

Registration & Coffee
Lab set-up

9:00 - 11:00

Getting Started with Deep Learning (DIGITS)

Deep learning is giving machines near human levels of visual recognition capabilities and disrupting many applications by replacing hand-coded software with predictive models learned directly from data. This lab introduces the machine learning workflow and provides hands-on experience with using deep neural networks (DNN) to solve a challenging real-world image classification problem. You will walk through the process of data preparation, model definition, model training and troubleshooting, validation testing and strategies for improving model performance. You’ll also see the benefits of GPU acceleration in the model training process. On completion of this lab you will have the knowledge to use NVIDIA DIGITS to train a DNN on your own image classification dataset.

Audience Level: Beginner

Getting Started with Deep Learning (DIGITS)

Deep learning is giving machines near human levels of visual recognition capabilities and disrupting many applications by replacing hand-coded software with predictive models learned directly from data. This lab introduces the machine learning workflow and provides hands-on experience with using deep neural networks (DNN) to solve a challenging real-world image classification problem. You will walk through the process of data preparation, model definition, model training and troubleshooting, validation testing and strategies for improving model performance. You’ll also see the benefits of GPU acceleration in the model training process. On completion of this lab you will have the knowledge to use NVIDIA DIGITS to train a DNN on your own image classification dataset.

Audience Level: Beginner

11:00 - 11:30

Morning Break

11:30 - 13:00

Deep Learning for Object Detection (DIGITS)

Building upon the foundational understanding of how deep learning is applied to image classification, this lab explores different approaches to the more challenging problem of detecting if an object of interest is present within an image and recognizing its precise location within the image. Numerous approaches have been proposed for training deep neural networks for this task, each having pros and cons in relation to model training time, model accuracy and speed of detection during deployment. On completion of this lab you will understand each approach and their relative merits. You’ll receive hands-on training applying cutting edge object detection networks trained using NVIDIA DIGITS on a challenging real-world dataset.

Audience Level: Beginner
Prerequisites: Getting Started with Deep Learning

Deep Learning for Object Detection (DIGITS)

Building upon the foundational understanding of how deep learning is applied to image classification, this lab explores different approaches to the more challenging problem of detecting if an object of interest is present within an image and recognizing its precise location within the image. Numerous approaches have been proposed for training deep neural networks for this task, each having pros and cons in relation to model training time, model accuracy and speed of detection during deployment. On completion of this lab you will understand each approach and their relative merits. You’ll receive hands-on training applying cutting edge object detection networks trained using NVIDIA DIGITS on a challenging real-world dataset.

Audience Level: Beginner
Prerequisites: Getting Started with Deep Learning

13:00 - 14:00

Lunch break

14:00 - 15:30

Deep Learning for Image Segmentation (TensorFlow)

There are a variety of important applications that need to go beyond detecting individual objects within an image and instead segment the image into spatial regions of interest. For example, in medical imagery analysis it is often important to separate the pixels corresponding to different types of tissue, blood or abnormal cells so that a particular organ can be isolated. In this lab we will use the TensorFlow deep learning framework to train and evaluate an image segmentation network on a medical imagery dataset.

Audience Level: Intermediate
Prerequisites: Basic knowledge of TensorFlow

Deep Learning for medical image analysis (MXNet)

Convolutional neural networks (CNNs) have proven to be just as effective in visual recognition tasks involving non-visible image types as in regular RGB camera imagery. One important application of these capabilities is medical image analysis, where we wish to detect features to provide decision support. In addition to processing ionizing and non-ionizing imagery such as CT scans and MRI, these applications also often require processing higher-dimensional imagery that may be volumetric and have a temporal component. In this lab you will use the deep learning framework MXNet to train a CNN to infer the volume of the left ventricle of the human heart from a time series of volumetric MRI data. You will learn how to extend the canonical 2D CNN to this more complex data and how to directly predict the ventricle volume rather than generating an image classification. In addition to the standard Python API you will also see how to use MXNet through R, which is an important data science platform in the medical research community.

Audience Level: Intermediate
Prerequisites: Basic knowledge of TensorFlow

15:30 - 16:00

Afternoon Break

16:00 - 17:30

Deep Learning Network Deployment

Deep learning software frameworks leverage GPU acceleration to train deep neural networks (DNNs). But what do you do with a DNN once you have trained it? The process of applying a trained DNN to new test data is often referred to as ‘inference’ or ‘deployment’. In this lab you will test three different approaches to deploying a trained DNN for inference. The first approach is to directly use inference functionality within a deep learning framework, in this case DIGITS and Caffe. The second approach is to integrate inference within a custom application by using a deep learning framework API, again using Caffe but this time through its Python API. The final approach is to use the NVIDIA GPU Inference Engine (GIE), which will automatically create an optimized inference run-time from a trained Caffe model and network description file. You will learn about the role of batch size in inference performance as well as various optimizations that can be made in the inference process. You’ll also explore inference for a variety of different DNN architectures trained in other DLI labs.

Audience Level: Intermediate
Prerequisites: Getting Started with Deep Learning and Deep Learning for Object Detection

Deep Learning Network Deployment

Deep learning software frameworks leverage GPU acceleration to train deep neural networks (DNNs). But what do you do with a DNN once you have trained it? The process of applying a trained DNN to new test data is often referred to as ‘inference’ or ‘deployment’. In this lab you will test three different approaches to deploying a trained DNN for inference. The first approach is to directly use inference functionality within a deep learning framework, in this case DIGITS and Caffe. The second approach is to integrate inference within a custom application by using a deep learning framework API, again using Caffe but this time through its Python API. The final approach is to use the NVIDIA GPU Inference Engine (GIE), which will automatically create an optimized inference run-time from a trained Caffe model and network description file. You will learn about the role of batch size in inference performance as well as various optimizations that can be made in the inference process. You’ll also explore inference for a variety of different DNN architectures trained in other DLI labs.

Audience Level: Intermediate
Prerequisites: Getting Started with Deep Learning and Deep Learning for Object Detection
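
One way to see why batch size matters for inference, as the deployment lab above explores: each forward pass pays a fixed overhead (kernel launch, data transfer) plus a per-image cost, so larger batches amortise the overhead at the price of higher per-request latency. A toy model with hypothetical constants, not measured figures:

```python
# Hypothetical per-pass costs; real values depend on the model and GPU.
LAUNCH_OVERHEAD_MS = 2.0   # fixed cost per forward pass
PER_IMAGE_MS = 0.5         # marginal cost of one more image in the batch

def latency_ms(batch):
    """Time for one forward pass over a batch of images."""
    return LAUNCH_OVERHEAD_MS + PER_IMAGE_MS * batch

def throughput_ips(batch):
    """Images processed per second at a given batch size."""
    return 1000.0 * batch / latency_ms(batch)

for b in (1, 8, 64):
    print(f"batch {b:3d}: latency {latency_ms(b):6.1f} ms, "
          f"throughput {throughput_ips(b):7.1f} img/s")
```

Throughput rises with batch size while per-batch latency also rises, which is the trade-off inference engines such as GIE let you tune for each deployment.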