Software Engineering

Developer Efficiency
Enterprise AI
Data Science
Software Engineering
Systems Engineering
Moderator

Author:

Carlos Guestrin

Professor, Computer Science
Stanford

Carlos Guestrin is a Professor in the Computer Science Department at Stanford University. His previous positions include the Amazon Professor of Machine Learning at the Computer Science & Engineering Department of the University of Washington, the Finmeccanica Associate Professor at Carnegie Mellon University, and the Senior Director of Machine Learning and AI at Apple, after the acquisition of Turi, Inc. (formerly GraphLab and Dato); Carlos co-founded Turi, which developed a platform for developers and data scientists to build and deploy intelligent applications. He is a technical advisor for OctoML.ai. His team has also released a number of popular open-source projects, including XGBoost, LIME, Apache TVM, MXNet, Turi Create, GraphLab/PowerGraph, SFrame, and GraphChi. Carlos received the IJCAI Computers and Thought Award and the Presidential Early Career Award for Scientists and Engineers (PECASE). He is also a recipient of the ONR Young Investigator Award, NSF CAREER Award, Alfred P. Sloan Fellowship, and IBM Faculty Fellowship, and was named one of the 2008 ‘Brilliant 10’ by Popular Science Magazine. Carlos’ work has received awards at a number of conferences and journals, including ACL, AISTATS, ICML, IPSN, JAIR, JWRPM, KDD, NeurIPS, UAI, and VLDB. He is a former member of the Information Sciences and Technology (ISAT) advisory group for DARPA.

Author:

Sakyasingha Dasgupta

Founder & CEO
EdgeCortix

Sakya is the founder and Chief Executive Officer of EdgeCortix. He is an artificial intelligence (AI) and machine learning technologist, entrepreneur, and engineer with over a decade of experience taking cutting-edge AI research from the ideation stage to scalable products across different industry verticals. He has led teams at global companies like Microsoft and IBM Research / IBM Japan, along with national research labs such as RIKEN Japan and the Max Planck Institute in Germany. Previously, he helped establish and lead the technology divisions at lean startups in Japan and Singapore in the semiconductor, robotics, and fintech sectors. Sakya is the inventor of over 20 patents and has published widely on machine learning and AI, with over 1,000 citations.

Sakya holds a PhD in Physics of Complex Systems from the Max Planck Institute in Germany, along with a Master's in Artificial Intelligence from The University of Edinburgh and a Bachelor's in Computer Engineering. Prior to founding EdgeCortix, he completed entrepreneurship studies at the MIT Sloan School of Management.

Author:

Luis Ceze

Co-founder and CEO
OctoML

Luis Ceze is Co-founder and CEO at OctoML, Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, and Venture Partner at Madrona Venture Group. His research focuses on the intersection of computer architecture, programming languages, machine learning, and biology. His current focus is on approximate computing for efficient machine learning and DNA-based data storage. He co-directs the Molecular Information Systems Lab (MISL), the Systems and Architectures for Machine Learning lab (SAMPL), and the Sampa Lab for HW/SW co-design. He is a recipient of an NSF CAREER Award, a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship, the IEEE TCCA Young Computer Architect Award, and a UIUC Distinguished Alumni Award.

Author:

Jian Zhang

Director, Machine Learning
SambaNova Systems

Transformers are in high demand, particularly in industries like BFSI and healthcare, for language processing, understanding, classification, generation, and translation. The parameter counts for models like GPT, which are fast becoming the norm in the world of NLP, are mind-boggling, and the cost involved in training and deploying them even more so. If the vast potential of LLMs is to extend beyond the wealthiest companies and research institutions on the planet, there is a need to evaluate how to lower the barriers to entry for experimentation and research on models like GPT. There is also a need to discuss the extent to which bigger is better in the field of practical and commercial NLP.

This panel will look at the state of play of how enterprises are using large language models today, what their plans are for future research in NLP, and how hardware & systems builders and organizations like Hugging Face can help bring state-of-the-art performance into production in smaller, more resource-constrained enterprises and labs.

Developer Efficiency
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Phil Brown

VP, Scaled Systems Product
Graphcore

Phil leads Graphcore’s efforts to build large-scale AI/ML processing capability using Graphcore’s unique Intelligence Processing Units (IPUs), IPU-Fabric, and Streaming Memory technology. Previously he has held a number of different roles at Graphcore, including Director of Applications, leading development of Graphcore’s flagship AI/ML models, and Director of Field Engineering, which acts as the focal point for technical engagements with customers. Prior to joining Graphcore, Phil worked for Cray Inc. in a number of roles, including leading its engagement with weather forecasting and climate research customers worldwide and serving as a technical architect. Phil holds a PhD in Computational Chemistry from the University of Bristol.

Author:

Selcuk Kopru

Director, Engineering & Research, Search
eBay

Selcuk Kopru is Head of ML & NLP at eBay and an experienced AI leader with proven expertise in creating and deploying cutting-edge NLP and AI technologies and systems. He is experienced in developing scalable machine learning solutions to solve big data problems involving text and multimodal data. He is also skilled in Python, Java, C++, machine translation, and pattern recognition. Selcuk holds a PhD in Computer Science, with a focus on NLP, from Middle East Technical University.

Author:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of Product Management, Product Marketing, Business Development and Corporate Development.

Author:

Morteza Noshad

Senior ML/NLP Scientist
Vida Health

Morteza Noshad is a senior ML/NLP scientist at Vida Health. He is skilled at designing large-scale NLP models for healthcare applications such as automated clinical documentation, symptom detection, and question answering. Morteza was a research scientist at Stanford University focusing on graph neural networks for clinical decision support systems, where he received the SAGE Scientist Award for his research. He received his Ph.D. in Computer Science from the University of Michigan, where he contributed to the theory of the information bottleneck in deep learning.

AI acceleration is a full stack effort and involves a multidisciplinary and holistic approach to design and optimization.

The field of deep learning has gained substantially from co-design concepts across the AI technology stack. The simultaneous design and optimization of hardware and software has led to new algorithms, numerical optimizations, and AI hardware. 

Looking at the AI stack for workloads like computer vision, NLP, and ads, in both a vertical and horizontal sense, there are significant opportunities and challenges for optimization through co-design. This panel will focus on software-defined chips and systems for AI (specs & evaluation, datacenter & edge) and look at the systems-level approach to co-design, including compilers, runtimes, and more.

Chip Design
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Systems Engineering

Author:

Nick Ni

Senior Director, Datacenter AI & Compute Markets
AMD

Nick Ni is Senior Director, Data Center AI and Compute Markets, in the Adaptive Embedded Computing Group (AECG) at AMD, responsible for the P&L of the fast-growing Data Center AI and compute segment. His team is responsible for product marketing and product management, including AI product planning, go-to-market, business development, and solution architecture.

Author:

Xiaoyong Liu

Director, AI Platform
Alibaba

Author:

Shubho Sengupta

Software Engineer
Meta

Shubho Sengupta is a Software Engineer at Meta, where he designs Meta’s research infrastructure for AI training. He started working on AI in 2014, on speech-related AI models like DeepSpeech and DeepVoice. Before that, he pioneered many of the foundational algorithms for general-purpose programming on GPUs, work that has won a Test of Time award. These days, he also works at the intersection of cryptography and computation, specifically on bipartite and multipartite matching algorithms.

Author:

Dr. Charles Fan

CEO and Co-Founder
MemVerge

Charles Fan is CEO and co-founder of MemVerge. Prior to MemVerge, Charles was the CTO of Cheetah Mobile leading its global technology teams, and an SVP/GM at VMware, founding the storage business unit that developed the Virtual SAN product. Charles also worked at EMC and was the founder of the EMC China R&D Center. Charles joined EMC via the acquisition of Rainfinity, where he was a co-founder and CTO. Charles received his Ph.D. and M.S. in Electrical Engineering from the California Institute of Technology, and his B.E. in Electrical Engineering from the Cooper Union.

Author:

Zaid Kahn

VP, Cloud AI & Advanced Systems Engineering
Microsoft

Zaid is currently a VP in Microsoft’s Silicon, Cloud Hardware, and Infrastructure Engineering organization where he leads systems engineering and hardware development for Azure including AI systems and infrastructure. Zaid is part of the technical leadership team across Microsoft that sets AI hardware strategy for training and inference. Zaid's teams are also responsible for software and hardware engineering efforts developing specialized compute systems, FPGA network products and ASIC hardware accelerators.

 

Prior to Microsoft, Zaid was head of infrastructure at LinkedIn, where he was responsible for all aspects of architecture and engineering for datacenters, networking, compute, storage, and hardware. Zaid also led several software development teams focused on building and managing infrastructure as code. This included zero-touch provisioning, software-defined networking, network operating systems (SONiC, OpenSwitch), self-healing networks, backbone controllers, software-defined storage, and distributed host-based firewalls. The network teams Zaid led built the global network for LinkedIn, including POPs, peering for edge services, IPv6 implementation, DWDM infrastructure, and the datacenter network fabric. The hardware and datacenter engineering teams Zaid led were responsible for water cooling to the racks, optical fiber infrastructure, and open hardware development contributed to the Open Compute Project Foundation (OCP).

 

Zaid holds several patents in networking and is a sought-after keynote speaker at top tier conferences and events. Zaid is currently the chairperson for the OCP Foundation Board. He is also currently on the EECS External Advisory Board (EAB) at UC Berkeley and a board member of Internet Ecosystem Innovation Committee (IEIC), a global internet think tank promoting internet diversity. Zaid has a Bachelor of Science in Computer Science and Physics from the University of the South Pacific.

Cerebras Systems builds the fastest AI accelerators in the industry. In this talk we will review how the size and scope of massive natural language processing (NLP) models present fundamental challenges to legacy compute and to traditional cloud providers. We will explore the importance of guaranteed node-to-node latency in large clusters, why it can't be achieved in the cloud, and how its absence prevents linear and even deterministic scaling. We will examine the complexity of distributing NLP models over hundreds or thousands of GPUs, and show how quickly and easily a cluster of Cerebras CS-2s is set up and how linear scaling can be achieved over millions of compute cores with Cerebras technology. And finally, we will show how innovative customers are using clusters of Cerebras CS-2s to train large language models to solve both basic and applied scientific challenges, including understanding the COVID-19 replication mechanism, epigenetic language modelling for drug discovery, and the development of clean energy. This enables researchers to test ideas that may otherwise languish for lack of resources and, ultimately, reduces the cost of curiosity.

Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Andy Hock

VP, Product Management
Cerebras

Dr. Andy Hock is VP of Product Management at Cerebras Systems, with responsibility for product strategy. His organization drives engagement with engineering and customers to inform the hardware, software, and machine learning technical requirements and accelerate world-leading AI with Cerebras’ products. Prior to Cerebras, Andy held senior leadership positions with Arete Associates, Skybox Imaging (acquired by Google), and Google. He holds a PhD in Geophysics and Space Physics from UCLA.

In this keynote, Dr. Cédric Bourrasset, AI Distinguished Expert at Atos, will reveal how Atos pioneered the successful architecture, build, and delivery of large-scale AI infrastructures. He will present a live demonstration of Atos-driven technology to illustrate new AI-driven endpoints with GPU and IPU workflow capabilities, featuring a global customer case study to elaborate on the complex challenges faced in designing and manufacturing large-scale AI computing platforms. He will also draw on over 15 years of personal experience in designing and manufacturing supercomputing systems.

Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Cedric Bourrasset

Head, High Performance AI Business Unit
Atos

Dr. Cedric Bourrasset is AI Business Leader for the High Performance Computing Business Unit at Atos. He is also the AI product manager for the Atos Codex AI Suite, software that enables AI workloads in HPC environments and integrates a computer vision solution. He joined Atos in 2016 as an expert in the HPC/AI domain.

Previously, Cedric received his Ph.D. in Electronics and Computer Vision from Blaise Pascal University in Clermont-Ferrand, defending a thesis on the dataflow model of computation for FPGA high-level synthesis in embedded machine learning applications.

Chip Design
Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Gordon Wilson

Co-Founder & CEO
Rain Neuromorphics

The true potential of AI rests on super-human learning capacity, and on the ability to selectively draw on that learning. Both of these properties, scale and selectivity, challenge the design of AI computers and the tools used to program them. A rich pool of new ideas is emerging, driven by a new breed of computing company, according to Graphcore co-founder Simon Knowles. At the AI Hardware Summit, Phil Brown, VP, Scaled Systems Product, discusses the creation of the Intelligence Processing Unit (IPU), a new type of processor specifically designed for AI computation. He looks ahead towards the development of AIs with super-human cognition, and explores the nature of the computation systems needed to make powerful AI an economic, everyday reality.

Developer Efficiency
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Phil Brown

VP, Scaled Systems Product
Graphcore

Phil leads Graphcore’s efforts to build large-scale AI/ML processing capability using Graphcore’s unique Intelligence Processing Units (IPUs), IPU-Fabric, and Streaming Memory technology. Previously he has held a number of different roles at Graphcore, including Director of Applications, leading development of Graphcore’s flagship AI/ML models, and Director of Field Engineering, which acts as the focal point for technical engagements with customers. Prior to joining Graphcore, Phil worked for Cray Inc. in a number of roles, including leading its engagement with weather forecasting and climate research customers worldwide and serving as a technical architect. Phil holds a PhD in Computational Chemistry from the University of Bristol.

Developer Efficiency
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Kunle Olukotun

Chief Technologist & Co-Founder
SambaNova Systems

Kunle Olukotun is the Cadence Design Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is a renowned pioneer in multi-core processor design and the leader of the Stanford Hydra chip multiprocessor (CMP) research project.

Prior to SambaNova Systems, Olukotun founded Afara Websystems to develop high-throughput, low-power multi-core processors for server systems. The Afara multi-core processor, called Niagara, was acquired by Sun Microsystems and now powers Oracle’s SPARC-based servers.

Olukotun is the Director of the Pervasive Parallel Lab and a member of the Data Analytics for What’s Next (DAWN) Lab, developing infrastructure for usable machine learning.

Olukotun is an ACM Fellow and IEEE Fellow for contributions to multiprocessors on a chip and multi-threaded processor design. Olukotun recently won the prestigious IEEE Computer Society’s Harry H. Goode Memorial Award and was also elected to the National Academy of Engineering—one of the highest professional distinctions accorded to an engineer.

Kunle received his Ph.D. in Computer Engineering from The University of Michigan.

Author:

Rodrigo Liang

Co-Founder & CEO
SambaNova Systems

Rodrigo is CEO and co-founder of SambaNova Systems. Prior to joining SambaNova, Rodrigo was responsible for SPARC Processor and ASIC Development at Oracle. He led the engineering organization responsible for the design of state-of-the-art processors and ASICs for Oracle's enterprise servers.

Chip Design
Edge AI
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering
Industry & Investment

Author:

Lip-Bu Tan

Founder & Chairman
Walden International

Lip-Bu Tan is Founder and Chairman of Walden International (“WI”), and Founding Managing Partner of Celesta Capital and Walden Catalyst Ventures, with over $5 billion under management. He formerly served as Chief Executive Officer and Executive Chairman of Cadence Design Systems, Inc. He currently serves on the boards of Schneider Electric SE (SU: FP), Intel Corporation (NASDAQ: INTC), and Credo Semiconductor (NASDAQ: CRDO).

 

Lip-Bu focuses on semiconductor/components, cloud/edge infrastructure, data management and security, and AI/machine learning. Lip-Bu received his B.S. from Nanyang University in Singapore, his M.S. in Nuclear Engineering from the Massachusetts Institute of Technology, and his MBA from the University of San Francisco. He also received an honorary Doctor of Humane Letters degree from the University of San Francisco. Lip-Bu currently serves on Carnegie Mellon University (CMU)’s Board of Trustees and the School of Engineering Dean’s Council, Massachusetts Institute of Technology (MIT)’s School of Engineering Dean’s Advisory Council, University of California Berkeley (UCB)’s College of Engineering Advisory Board and its Computing, Data Science, and Society Advisory Board, and University of California San Francisco (UCSF)’s Executive Council. He is also a member of the Global Advisory Board of METI Japan, The Business Council, and Committee 100. He served on the board of the Global Semiconductor Alliance (GSA) from 2009 to 2021, and as a Trustee of Nanyang Technological University (NTU) in Singapore from 2006 to 2011. Lip-Bu has been named one of the Top 10 Venture Capitalists in China by Zero2ipo and was listed as one of the Top 50 Venture Capitalists on the Forbes Midas List. He is the recipient of imec’s 2023 Lifetime of Innovation Award, the Semiconductor Industry Association (SIA) 2022 Robert N. Noyce Award, and GSA’s 2016 Dr. Morris Chang Exemplary Leadership Award. In 2017, he was ranked #1 among the most well-connected executives in the technology industry by the analytics firm Relationship Science.
