Hardware Engineering | Page 2 | Kisaco Research

Hardware Engineering

Developer Efficiency
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Kunle Olukotun

Chief Technologist & Co-Founder
SambaNova Systems

Kunle Olukotun is the Cadence Design Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is a renowned pioneer in multi-core processor design and the leader of the Stanford Hydra chip multiprocessor (CMP) research project.

Prior to SambaNova Systems, Olukotun founded Afara Websystems to develop high-throughput, low-power multi-core processors for server systems. Afara was acquired by Sun Microsystems, and its multi-core processor, Niagara, now powers Oracle’s SPARC-based servers.

Olukotun is the Director of the Pervasive Parallel Lab and a member of the Data Analytics for What’s Next (DAWN) Lab, developing infrastructure for usable machine learning.

Olukotun is an ACM Fellow and IEEE Fellow for contributions to multiprocessors on a chip and multi-threaded processor design. Olukotun recently won the prestigious IEEE Computer Society’s Harry H. Goode Memorial Award and was also elected to the National Academy of Engineering—one of the highest professional distinctions accorded to an engineer.

Kunle received his Ph.D. in Computer Engineering from The University of Michigan.


Author:

Rodrigo Liang

Co-Founder & CEO
SambaNova Systems

Rodrigo is CEO and co-founder of SambaNova Systems. Prior to joining SambaNova, Rodrigo was responsible for SPARC Processor and ASIC Development at Oracle. He led the engineering organization responsible for the design of state-of-the-art processors and ASICs for Oracle's enterprise servers.


Cerebras Systems builds the fastest AI accelerators in the industry. In this talk we will review how the size and scope of massive natural language processing (NLP) models present fundamental challenges to legacy compute and to traditional cloud providers. We will explore the importance of guaranteed node-to-node latency in large clusters, why it can’t be achieved in the cloud, and how its absence prevents linear and even deterministic scaling. We will examine the complexity of distributing NLP models over hundreds or thousands of GPUs, show how quickly and easily a cluster of Cerebras CS-2s is set up, and show how linear scaling can be achieved over millions of compute cores with Cerebras technology. Finally, we will show how innovative customers are using clusters of Cerebras CS-2s to train large language models to solve both basic and applied scientific challenges, including understanding the COVID-19 replication mechanism, epigenetic language modelling for drug discovery, and the development of clean energy. This enables researchers to test ideas that might otherwise languish for lack of resources and, ultimately, reduces the cost of curiosity.

 

Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Andy Hock

VP, Product Management
Cerebras

Dr. Andy Hock is VP of Product Management at Cerebras Systems with responsibility for product strategy. His organization drives engagement with engineering and our customers to inform the hardware, software, and machine learning technical requirements and accelerate world-leading AI with Cerebras’ products. Prior to Cerebras, Andy held senior leadership positions with Arete Associates, Skybox Imaging (acquired by Google), and Google. He holds a PhD in Geophysics and Space Physics from UCLA.


Chip Design
Edge AI
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering
Industry & Investment

Author:

Lip-Bu Tan

Founder & Chairman
Walden International

Lip-Bu Tan is Founder and Chairman of Walden International (“WI”), and Founding Managing Partner of Celesta Capital and Walden Catalyst Ventures, with over $5 billion under management.  He formerly served as Chief Executive Officer and Executive Chairman of Cadence Design Systems, Inc.  He currently serves on the Board of Schneider Electric SE (SU: FP), Intel Corporation (NASDAQ: INTC), and Credo Semiconductor (NASDAQ: CRDO).

 

Lip-Bu focuses on semiconductor/components, cloud/edge infrastructure, data management and security, and AI/machine learning.

Lip-Bu received his B.S. from Nanyang University in Singapore, his M.S. in Nuclear Engineering from the Massachusetts Institute of Technology, and his MBA from the University of San Francisco. He also received an honorary Doctor of Humane Letters degree from the University of San Francisco.

Lip-Bu currently serves on Carnegie Mellon University (CMU)’s Board of Trustees and the School of Engineering Dean’s Council, Massachusetts Institute of Technology (MIT)’s School of Engineering Dean’s Advisory Council, University of California Berkeley (UCB)’s College of Engineering Advisory Board and its Computing, Data Science, and Society Advisory Board, and University of California San Francisco (UCSF)’s Executive Council. He’s also a member of the Global Advisory Board of METI Japan, The Business Council, and Committee 100. He served on the board of the Global Semiconductor Alliance (GSA) from 2009 to 2021, and as a Trustee of Nanyang Technological University (NTU) in Singapore from 2006 to 2011.

Lip-Bu has been named one of the Top 10 Venture Capitalists in China by Zero2ipo and was listed as one of the Top 50 Venture Capitalists on the Forbes Midas List. He’s the recipient of imec’s 2023 Lifetime of Innovation Award, the Semiconductor Industry Association (SIA) 2022 Robert N. Noyce Award, and GSA’s 2016 Dr. Morris Chang Exemplary Leadership Award. In 2017, he was ranked #1 among the most well-connected executives in the technology industry by the analytics firm Relationship Science.


Developer Efficiency
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Alexis Black Bjorlin

VP/GM, DGX Cloud
NVIDIA

Dr. Alexis Black Bjorlin was previously VP, Infrastructure Hardware Engineering at Meta. She also serves on the board of directors at Digital Realty and Celestial AI. Prior to Meta, Dr. Bjorlin was Senior Vice President and General Manager of Broadcom’s Optical Systems Division and previously Corporate Vice President of the Data Center Group and General Manager of the Connectivity Group at Intel. Prior to Intel, she spent eight years as President of Source Photonics, where she also served on the board of directors. She earned a B.S. in Materials Science and Engineering from Massachusetts Institute of Technology and a Ph.D. in Materials Science from the University of California at Santa Barbara.


AI Hardware Summit attendees are invited to attend an extended networking session where they can meet attendees from across both events. The Meet & Greet is a perfect opportunity to reconnect with peers, expand your network, and discuss the state of ML across the cloud-edge continuum!

Chip Design
Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Colin Murdoch

Chief Business Officer
DeepMind

Decades of international commercial experience and deep technical expertise mean Colin is uniquely placed to ensure DeepMind’s cutting-edge research benefits as many people as possible. As Chief Business Officer of DeepMind, he oversees a wide range of teams including Applied, which applies research breakthroughs to Google products and infrastructure used by billions of people. He also helps drive the growth of DeepMind, building and leading critical functions including finance and strategy, and overseeing external and commercial partnerships. Originally an electronics and software engineer, he has held senior positions at both start-ups and global companies such as Thomson Reuters, helping them solve their own complex, mission-critical, real-world challenges.


Author:

Cade Metz

Technology Correspondent
New York Times

Cade Metz is a reporter with The New York Times, covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Genius Makers is his first book. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain’s leading science and technology news sites.

A native of North Carolina and a graduate of Duke University, Metz, 48, works in The New York Times’ San Francisco bureau and lives across the bay with his wife Taylor and two daughters.


Graphcore's Intelligence Processing Unit (IPU), built on its unique wafer-on-wafer technology architecture, enables innovators across all industries to undertake breakthrough research with the power of AI compute. To deliver what Graphcore believes will be the standard for machine intelligence compute, it follows a continuous integration (CI) and continuous delivery (CD) process to ensure incremental code changes are delivered quickly and reliably to production. In this workshop, Graphcore will share how it’s using Synopsys formal verification solutions throughout the CI/CD process to deliver bug-free silicon.  Workshop topics include:

  • An introduction to Sequential Equivalence Checking (SEQ) and Formal Testbench Analyzer (FTA) applications, part of Synopsys VC Formal
  • Graphcore’s formal verification deployment to maximize engineering productivity
  • How formal is modified for CI and CD
  • Strategies Graphcore employed to overcome reproducibility challenges at the CI stage

 

Hardware Engineering workshops are restricted to hardware engineers, architects, and software designers from companies interested in learning how to design and deploy ML on hardware platforms.

Workshops are application-only and subject to eligibility and availability. The workshops are free, and lunch, shared networking sessions, and access to the Meet and Greet function and keynote are included in the developer pass. If you're a hardware or software engineer, please apply using the form in the registration section of the website or by emailing [email protected]. There are approximately 30 spaces available.

Chip Design
Novel AI Hardware
Systems Design
Hardware Engineering
Systems Engineering

Author:

Manish Pandey

Fellow & VP, R&D
Synopsys

Manish Pandey is Vice President R&D and Fellow at Synopsys, and an Adjunct Professor at Carnegie Mellon University. He completed his PhD in Computer Science from Carnegie Mellon University and a B. Tech. in Computer Science from the Indian Institute of Technology Kharagpur. He currently leads the R&D teams for formal and static technologies, and machine learning at Synopsys. He previously led the development of several static and formal verification technologies at Verplex and Cadence which are in widespread use in the industry. Manish has been the recipient of the IEEE Transactions on CAD Outstanding Young Author award and holds over two dozen patents and refereed publications.


Author:

Anthony Wood

Formal Verification Lead
Graphcore

Anthony has held a number of design and verification roles working on CPUs, GPUs and IPUs at Infineon, Xmos, Imagination and Graphcore. At Imagination he was head of verification for the high-end GPU cores. At Graphcore his responsibilities include leveraging formal verification techniques throughout the silicon team; reducing delivery timescales through Continuous Delivery; and working to ensure verification disciplines are adopted for DFT. He strives to find ways of improving the productivity of silicon engineers and of course he agonises about potential verification holes.


Deploying a neural network on an embedded solution requires more than compiling a trained model. Join us to discuss the IP and tooling available from Cadence that allow architects to start with a neural network model, run through quantization and partitioning, map the result to a configurable embedded target, simulate the design to get performance data (both cycle and energy), and iterate through design optimizations to reach an optimal implementation. Our experts will give a technical walkthrough of the tools, features, supported frameworks, and infrastructure available to both software and silicon designers.

Hardware Engineering workshops are restricted to hardware engineers, architects, and software designers from companies interested in learning how to design and deploy ML on hardware platforms.

Workshops are application-only and subject to eligibility and availability. The workshops are free, and lunch, shared networking sessions, and access to the Meet and Greet function and keynote are included in the developer pass. If you're a hardware or software engineer, please apply using the form in the registration section of the website or by emailing [email protected]. There are approximately 30 spaces available.

 

Chip Design
Edge AI
Novel AI Hardware
Hardware Engineering

Author:

Ade Bamidele

Design Engineering Architect
Cadence Design Systems

Ade is an Architect in the Tensilica Central Applications Team. Ade focuses on the optimization and acceleration of imaging and vision algorithms on Vision and AI DSPs and engines. He has over 15 years of experience in the R&D and optimization of computer vision and pattern recognition algorithms on vision and embedded devices. Ade graduated from University College London in 2006 with a doctorate in Electronic Engineering, with a thesis focusing on computational visual attention.


Author:

Michael Hubrig

Sr Design Engineering Architect
Cadence Design Systems

Michael is Sr. Architect in the Tensilica Central Applications Team. His team provides deep technical support for Vision and AI DSP and engines. Michael has 20 years of experience porting imaging and vision algorithms to DSP platforms.


Author:

Rohan Darole

Sr Principal Design Engineer
Cadence Design Systems

Rohan Darole is an ML Product Specialist at Cadence TIP (Tensilica IP Group). He received his Master’s in Computer Science from SUNY-UB, Buffalo, NY. Rohan leads a team of application engineers responsible for definition, realization, and customer engagements of the Tensilica AI MAX Product Family. Previously he worked on CV/ML acceleration with Vision DSPs, imaging (ISP), and video codec SW development.


RISC-V adoption has increased dramatically throughout 2022, due to the architecture's simple instruction set, the ability to better pre-process neural networks for acceleration, and its open-source nature. The advent of the RISC-V vector extension allows AI processor builders to develop on top of instructions that other companies are using and then innovate in whatever domain they want to specialize in.


Chip Design
Edge AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Strategy
Systems Engineering

Author:

Bing Yu

Senior Technical Director
Andes Technology

Bing Yu is a Sr. Technical Director at Andes Technology. He has over 30 years of experience in technical leadership and management, specializing in machine learning hardware, high performance CPUs and system architecture. In his current role, he is responsible for processor roadmap, architecture, and product design. Bing received his BS degree in Electrical Engineering from San Jose State University and completed the Stanford Executive Program (SEP) at the Stanford Graduate School of Business.



Theoretical metrics such as TOPS frequently fail to predict real-world AI chip performance accurately and, to varying degrees, typically overpromise and underdeliver. There is a lot of angst and discussion about the root cause, but an often-overlooked culprit is the clock network, one of the largest networks on an SoC.

The clock network can be the ultimate gating factor or enabler in data flow on a chip. Data can only move as far as one clock cycle allows. As chips grow larger and approach reticle limits, clock paths also significantly lengthen, further complicating existing clocking problems such as skew and silicon variation (at finer process geometries). An optimized clock network can streamline data flow and raise on-chip interconnect bandwidth.

Standard clock topologies that work well on small chips cannot scale to today’s very large chips. A new approach, called intelligent clock networks, delivers an “ideal” clock close to the point of use, simplifying SoC designs and virtually eliminating the overhead typically expended on clock distribution. Mo Faisal, CEO and Founder of Movellus, will examine how intelligent clock networks can usher in a new era of big-chip design for AI and HPC applications. Throughout his presentation, Mo will showcase how these new clock network types can help architects reach their architectural goals while generating differentiation in silicon cost and power efficiency in an already crowded market segment.

Chip Design
Novel AI Hardware
Hardware Engineering

Author:

Mo Faisal

Founder & CEO
Movellus

Prior to founding Movellus, Mo held positions at semiconductor companies including Intel and PMC Sierra. He received his B.S. from the University of Waterloo, and his M.S. and Ph.D. from the University of Michigan, and holds several patents. Mo was named a “Top 20 Entrepreneur” by the University of Michigan Zell Lurie Institute.
