Strategy | Page 3 | Kisaco Research

Strategy

Enterprise AI
ML at Scale
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Daniel Wu

Strategic AI Leadership | Keynote Speaker | Educator | Entrepreneur | Course Facilitator
Stanford University AI Professional Program

Daniel Wu is an accomplished technical leader with over 20 years of expertise in software engineering, AI/ML, and team development. With a diverse career spanning technology, education, finance, and healthcare, he is credited for establishing high-performing AI teams, pioneering point-of-care expert systems, co-founding a successful online personal finance marketplace, and leading the development of an innovative online real estate brokerage platform. Passionate about technology democratization and ethical AI practices, Daniel actively promotes these principles through involvement in computer science and AI/ML education programs. A sought-after speaker, he shares insights and experiences at international conferences and corporate events. Daniel holds a computer science degree from Stanford University.

The relentless growth in the size and sophistication of AI models and data sets continues to put pressure on every aspect of AI processing systems. Advances in domain-specific architectures and hardware/software co-design have resulted in enormous increases in AI processing performance, but the industry needs even more. Memory systems and interconnects that supply data to AI processors will continue to be of critical importance, requiring additional innovation to meet the needs of future processors. Join Rambus Fellow and Distinguished Inventor, Dr. Steven Woo, as he leads a panel of technology experts in discussing the importance of improving memory and interfaces and enabling new system architectures, in the quest for greater AI/ML performance.

Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Strategy
Systems Engineering

Author:

Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting-edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry, arising from deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and delivered state-of-the-art performance without requiring changes to the existing infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College

Author:

Euicheol Lim

Research Fellow, System Architect
SK Hynix

Eui-cheol Lim is a Research Fellow and leader of the Solution Advanced Technology team at SK Hynix. He received his B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and his Ph.D. degree from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, leading the architecture of most Exynos mobile SoCs. His recent research interests are memory and storage system architecture with new media memories and new memory solutions such as CXL memory and processing-in-memory (PIM). In particular, he is proposing a new computing architecture based on PIM, more efficient and flexible than existing AI accelerators, for processing generative AI and large language models (LLMs).

Author:

Sumti Jairath

Chief Architect
SambaNova Systems

Sumti Jairath is Chief Architect at SambaNova Systems, with expertise in hardware-software co-design. Sumti worked on PA-RISC-based Superdome servers back at HP, followed by several generations of SPARC CMT processors at Sun Microsystems and Oracle. At Oracle, Sumti worked on SQL, Data-analytics and Machine Learning acceleration in SPARC processors. Sumti holds 27 patents in computer architecture and hardware-software co-design.

Author:

Matt Fyles

SVP, Software
Graphcore

Matt Fyles is a computer scientist with over 20 years of proven experience in the design, delivery and the support of software and hardware within the microprocessor market. As SVP Software at Graphcore, Matt has built the company’s Poplar software stack from scratch, co-designed with the IPU for machine intelligence. He currently oversees the Software team’s work on the Poplar SDK, helping to support Graphcore’s growing community of developers.

Enterprise AI
ML at Scale
Data Science
Software Engineering
Strategy

Author:

Dr. Caiming Xiong

VP of AI Research and Applied AI
Salesforce

Dr. Caiming Xiong is VP of AI Research and Applied AI at Salesforce. Dr. Xiong holds a Ph.D. from the Department of Computer Science and Engineering, University at Buffalo, SUNY, and worked as a Postdoctoral Research Scholar at the University of California, Los Angeles (UCLA).

We have witnessed a major paradigm shift in how AI affects our daily lives. While AI model training is typically done in cloud infrastructure, model inferencing has grown enormously on power-, area-, bandwidth-, and memory-constrained edge devices.

These inferencing workloads have varying computational and memory needs, along with stringent power and silicon-area requirements that can be very challenging to meet. AI-led innovation is shaping the next generation of embedded hardware and software design alike. This talk will illustrate the design philosophies and challenges involved in designing best-in-class AI hardware accelerators.

Chip Design
Novel AI Hardware
Hardware Engineering
Strategy
Systems Engineering

Author:

Sriraman Chari

Fellow & Head of AI Accelerator IP Solution
Cadence Design Systems

In developing applications for a variety of infrastructure and hardware targets, machine learning developers face a dynamic and uncertain landscape where optimization and interoperability become challenging tasks.

This panel will address how to build infrastructure with developer efficiency in mind, so that developers can focus on creating game-changing machine learning solutions for organizations and consumers. It will also address how hardware, systems and other technology vendors can assist in this effort.

Developer Efficiency
Enterprise AI
ML at Scale
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Ritu Goel

Director, Product Management, Adobe Sensei
Adobe

Ritu Goel is Director of Product Management at Adobe, where she has been driving strategy for the AI/ML platform since its early days, with the vision of democratizing AI/ML development at Adobe. Prior to this, Ritu spent more than a decade leading product strategy and execution for various enterprise and consumer products and platforms at eBay, Macys.com, and Infosys. Ritu holds a bachelor of engineering degree from the Indian Institute of Technology, Roorkee.

Author:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of Product Management, Product Marketing, Business Development and Corporate Development.

Author:

Sree Ganesan

VP of Product
d-Matrix

Sree Ganesan, VP of Product at d-Matrix, is responsible for product management functions and business development efforts across the company. She manages the product lifecycle and the definition and translation of customer needs to the product development function, acting as the voice of the customer. Prior to d-Matrix, Sree led the Software Product Management effort at Habana Labs/Intel, delivering the state-of-the-art deep learning capabilities of the Habana SynapseAI® software suite to the market. Previously, she was Engineering Director in Intel’s AI Products Group, where she was responsible for AI software strategy and deep learning framework integration for Nervana NNP AI accelerators. Sree earned a bachelor’s degree in electrical engineering from the Indian Institute of Technology Madras and a PhD in computer engineering from the University of Cincinnati, Ohio.

Author:

Daniel Wu

Strategic AI Leadership | Keynote Speaker | Educator | Entrepreneur | Course Facilitator
Stanford University AI Professional Program

Daniel Wu is an accomplished technical leader with over 20 years of expertise in software engineering, AI/ML, and team development. With a diverse career spanning technology, education, finance, and healthcare, he is credited for establishing high-performing AI teams, pioneering point-of-care expert systems, co-founding a successful online personal finance marketplace, and leading the development of an innovative online real estate brokerage platform. Passionate about technology democratization and ethical AI practices, Daniel actively promotes these principles through involvement in computer science and AI/ML education programs. A sought-after speaker, he shares insights and experiences at international conferences and corporate events. Daniel holds a computer science degree from Stanford University.

Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering
Hardware Engineering

Author:

Victor Peng

President, Adaptive and Embedded Computing Group
AMD

Victor Peng is President of the Adaptive and Embedded Computing group at AMD. He is responsible for AMD’s Adaptive SmartNIC, FPGA, Adaptive SoC, embedded CPU, and embedded APU businesses, which serve multiple market segments including the data center, communications, automotive, industrial, A&D, healthcare, test/measure/emulation, and other embedded markets. Peng also serves on the board of KLA Corporation.

Peng rejoined AMD in 2022 after 14 years at Xilinx, most recently serving as president and CEO. Prior to joining Xilinx, Peng worked at AMD as corporate vice president of silicon engineering for the graphics products group (GPG) and was the co-leader of the central silicon engineering team supporting graphics, game console products, and CPU chipsets. Prior to that, Peng held executive and engineering leadership roles at ATI, TZero Technologies, MIPS Technologies, SGI, and Digital Equipment Corp. 

Developer Efficiency
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Mark Russinovich

CTO and Technical Fellow, Azure
Microsoft

Mark Russinovich is Chief Technology Officer and Technical Fellow for Microsoft Azure, Microsoft’s global enterprise-grade cloud platform. A widely recognized expert in distributed systems, operating systems and cybersecurity, Mark earned a Ph.D. in computer engineering from Carnegie Mellon University. He later co-founded Winternals Software, joining Microsoft in 2006 when the company was acquired. Mark is a popular speaker at industry conferences such as Microsoft Ignite, Microsoft Build, and RSA Conference. He has authored several nonfiction and fiction books, including the Microsoft Press Windows Internals book series, Troubleshooting with the Sysinternals Tools, and the cybersecurity thrillers Zero Day, Trojan Horse, and Rogue Code.

Chip Design
Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Rashid Attar

Head of Engineering, Cloud/Edge AI Inference Accelerators
Qualcomm

Rashid Attar joined Qualcomm, San Diego, CA, USA, in 1996 and has been involved in various aspects of CDMA wireless data (EV-DO) and voice systems (IS-95, 1x-Advanced). He was the Project Engineer of CDMA2000-Advanced from 2009 to 2013 and CDMA Modem Systems Lead at QCT through 2013. From 2014 to mid-2016, he led the ultra-low-power ASIC platform project. He is currently a Vice President of Engineering with Corporate Research and Development, Qualcomm, where he leads the ASIC and Hardware Department in Qualcomm Research. The Qualcomm Research portfolio consists of communications (5G, Cellular V2X, satellite communications, Wi-Fi, and the Industrial Internet of Things), ASIC and hardware R&D, and embedded IoE systems (always-on computer vision, autonomous driving, robotics, and AR/VR). The ASIC and Hardware Group R&D portfolio spans 5G (RFICs, PAs, interfaces, packaging), processors (CPUs, programmable deep learning accelerators), an ultra-low-power platform (processor, communications, memory, machine learning accelerators, power management, wireless charging), core CMOS R&D (3D-IC and thermal-aware designs), and antenna design. He holds approximately 160 granted U.S. patents.

Transformers are in high demand, particularly in industries like BFSI and healthcare, for language processing, understanding, classification, generation, and translation. The parameter counts of models like GPT, which are fast becoming the norm in the world of NLP, are mind-boggling, and the cost involved in training and deploying them even more so. If the vast potential of LLMs is to extend beyond the wealthiest companies and research institutions on the planet, then there is a need to evaluate how to lower the barriers to entry for experimentation and research on models like GPT. There is also a need to discuss the extent to which bigger is better in practical and commercial NLP.

This panel will look at the state of play of how enterprises are using large language models today, what their plans are for future research in NLP, and how hardware and systems builders and organizations like Hugging Face can help bring state-of-the-art performance into production in smaller, more resource-constrained enterprises and labs.

Developer Efficiency
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Phil Brown

VP, Scaled Systems Product
Graphcore

Phil leads Graphcore’s efforts to build large-scale AI/ML processing capability using Graphcore’s unique Intelligence Processing Units (IPUs) and its IPU-Fabric and Streaming Memory technologies. Previously he has held a number of roles at Graphcore, including Director of Applications, leading development of Graphcore’s flagship AI/ML models, and Director of Field Engineering, the focal point for technical engagements with customers. Prior to joining Graphcore, Phil worked for Cray Inc. in a number of roles, including as a technical architect and leading its engagement with weather forecasting and climate research customers worldwide. Phil holds a PhD in Computational Chemistry from the University of Bristol.

Author:

Selcuk Kopru

Director, Engineering & Research, Search
eBay

Selcuk Kopru is Head of ML & NLP at eBay and an experienced AI leader with proven expertise in creating and deploying cutting-edge NLP and AI technologies and systems. He is experienced in developing scalable machine learning solutions to big data problems involving text and multimodal data, and is also skilled in Python, Java, C++, machine translation, and pattern recognition. Selcuk is a strong research professional with a PhD in Computer Science (NLP) from Middle East Technical University.

Author:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of Product Management, Product Marketing, Business Development and Corporate Development.

Author:

Morteza Noshad

Senior ML/NLP Scientist
Vida Health

Morteza Noshad is a senior ML/NLP scientist at Vida Health. He is skilled at designing large-scale NLP models for healthcare applications such as automated clinical documentation, symptom detection, and question answering. Morteza was a research scientist at Stanford University focusing on graph neural networks for clinical decision support systems, where he received the SAGE Scientist Award for his research. Morteza received his Ph.D. in Computer Science from the University of Michigan, where he contributed to the theory of the information bottleneck in deep learning.

Vision
NLP and Speech
Connectivity and 5G
Chip and Systems Design
Innovation at the Edge
Edge Trade Offs
On Device ML
Data Science
Hardware and Systems Engineering
Software Engineering
Strategy
Industry & Investment