The future of AI begins at the sensor. Join BrainChip for this exploration of relevant data propagation, regions of interest, and making the applications of tomorrow more efficient today by processing at the sensor (see the sketch after the agenda below).
- How does computer vision work?
- Overview of use cases
- References
- Short slide of offering
- Rule Based Engine
- Alert system or reporting
- Deployment & implementation strategies
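As a rough illustration of the "relevant data propagation" idea above, the sketch below filters at the sensor so that only a changing region of interest is forwarded downstream. It is a minimal toy example: the function name, thresholds, and frame sizes are illustrative assumptions and do not represent BrainChip's Akida pipeline.

```python
import numpy as np

def propagate_roi(prev_frame, frame, diff_thresh=25, min_active_pixels=50):
    """Toy sensor-side filter: forward a cropped region of interest only
    when enough pixels change between consecutive frames."""
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > diff_thresh
    if motion.sum() < min_active_pixels:
        return None                      # nothing relevant: propagate no data downstream
    ys, xs = np.nonzero(motion)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return frame[y0:y1, x0:x1]           # only the ROI travels to the classifier

# Example with synthetic 8-bit frames: a small bright patch appears in the second frame.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 70:100] = 200
roi = propagate_roi(prev, curr)
print(None if roi is None else roi.shape)   # (20, 30)
```

The point of the sketch is simply that most frames (or most of each frame) never need to leave the sensor, which is where the efficiency gains of sensor-side processing come from.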
Anthony Valle
Anthony Valle is a Senior Pre-Sales Engineer for North America and Latin America at Ipsotek, an Atos company. Anthony has over 20 years of experience in IT and security technology solutions. He works closely with clients to develop solutions for AI at the edge, using Ipsotek's patented Scenario-Based Rule Engine (SBRE), a powerful tool for precisely defining behaviors of interest as they unfold in dynamic, complex real-world environments.
Prior to joining Atos, he held first Sales Engineering and later Application Engineering roles at Avigilon, one of the world's largest security manufacturers. Throughout his career, he has held key management positions within the industry and earned numerous certifications in security technology.
Deep neural networks (DNNs), a subset of machine learning (ML), provide a foundation for automating conversational artificial intelligence (CAI) applications. FPGAs provide hardware acceleration enabling high-density, low-latency CAI. In this presentation, we will provide an overview of CAI and data center use cases, describe the traditional compute model and its limitations, and show how an ML compute engine integrated into the Achronix FPGA can lead to 90% cost reductions for speech transcription.
Salvador Alvarez
Salvador Alvarez is the Senior Manager of Product Planning at Achronix, coordinating the research, development, and launch of new Achronix products and solutions. With over 20 years of experience in product growth, roadmap development, and competitive intelligence and analysis in the semiconductor, automotive, and edge AI industries, Sal Alvarez is a recognized expert in helping customers realize the advantages of edge AI and deep learning technology over legacy cloud AI approaches. Sal holds a B.S. in computer science and electrical engineering from the Massachusetts Institute of Technology.
Achronix
Website: https://www.achronix.com/
Achronix is a leading manufacturer of FPGA and eFPGA IP data acceleration solutions specifically tuned for high-performance AI and ML applications. FPGAs are paving the way for the next era in AI applications and are the ubiquitous building blocks for AI deployments from the cloud to the edge to IoT. Our revolutionary new 7nm Speedster®7t FPGAs and Speedcore™ eFPGA IP are optimized for high-bandwidth workloads and eliminate the performance bottlenecks associated with traditional FPGAs.
Delivering ASIC-like performance, Speedster7t FPGAs are highly configurable, highly flexible compute engines. Built with high-performance 112Gbps transceivers, high-bandwidth GDDR6 interfaces, and high-speed PCIe Gen5 ports, Speedster7t FPGAs provide the high-speed data and memory interfaces necessary for AI/ML applications. The Speedster7t FPGAs also feature a new machine learning processor (MLP) which supports new AI/ML number formats such as block floating point and provides >80 TOPS performance.
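For readers unfamiliar with block floating point, the NumPy sketch below shows the general idea: each block of values shares a single exponent while individual values keep low-precision integer mantissas. The block size and mantissa width here are illustrative assumptions only and do not reflect the Speedster7t MLP's actual formats.

```python
import numpy as np

def block_float_quantize(x, block_size=16, mantissa_bits=8):
    """Quantize a 1-D array with block floating point: each block of values
    shares one exponent, and each value keeps a low-precision signed mantissa."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    out = np.empty_like(blocks)
    max_mant = 2 ** (mantissa_bits - 1) - 1          # e.g. 127 for 8-bit mantissas
    for i, blk in enumerate(blocks):
        # Shared exponent: scale the block's largest magnitude into the mantissa range.
        peak = np.max(np.abs(blk))
        exp = 0 if peak == 0 else int(np.ceil(np.log2(peak / max_mant)))
        scale = 2.0 ** exp
        # Per-value integer mantissas, then dequantize to inspect the error.
        mant = np.clip(np.round(blk / scale), -max_mant, max_mant)
        out[i] = mant * scale
    return out.reshape(-1)[:len(x)]

# Example: quantization error stays small relative to each block's dynamic range.
vals = np.random.randn(64) * np.logspace(0, 3, 64)
print(np.max(np.abs(vals - block_float_quantize(vals))))
```

Sharing one exponent per block keeps storage and multiplier width close to plain integer arithmetic while retaining much of floating point's dynamic range, which is why formats of this kind are attractive for ML accelerators.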
Achronix’ Speedcore eFPGA IP brings the power and flexibility of programmable logic to ASICs and SoCs. Speedcore IP can be seamlessly integrated into a custom design and is the only eFPGA technology shipping in high-volume production today. With Speedcore IP, customers define both resource counts and mix for logic, embedded memory blocks, MLP, and DSP blocks at up to 90% cost savings vs. traditional standalone FPGA solutions.
Visit achronix.com to learn more about our FPGA technology optimized for AI/ML applications.
As AI makes its way into healthcare and medical applications, the role of hardware accelerators in the successful deployment of such large AI models becomes more and more important. Nowadays, large language models such as GPT-3 and T5 offer unprecedented opportunities to solve challenging healthcare business problems like drug discovery, medical term mapping, and insight generation from electronic health records. However, efficient and cost-effective training, as well as deployment and maintenance of such models in production, remains a challenge for the healthcare industry. This presentation will review a few open challenges and opportunities in the healthcare industry and the benefits that AI hardware innovation may bring to ML utilization.
Hooman Sedghamiz
Hooman Sedghamiz is Director of AI & ML at Bayer. He has led algorithm development and generated valuable insights to improve medical products, ranging from implantable and wearable medical and imaging devices to bioinformatics and pharmaceutical products, for a variety of multinational medical companies.
He has led projects and data science teams and developed algorithms for closed-loop active medical implants (e.g., pacemakers, cochlear and retinal implants), as well as advanced computational biology to study the time evolution of cellular networks associated with cancer, depression, and other illnesses.
His experience in healthcare also extends to image processing for computed tomography (CT) and interventional X-ray (iX-Ray), as well as signal processing of physiological signals such as ECG, EMG, EEG, and ACC.
Recently, his team has been working on cutting-edge natural language processing, developing models to address healthcare challenges involving textual data.
One of the biggest challenges in the US is managing the cost of healthcare. Although healthcare costs in the US are high, our life expectancy is only average. In this talk we will look at some of the core causes of healthcare costs and what modern AI hardware can do to lower them. We will see that faster and bigger GPUs alone will not save us. We need detailed models across a wide swath of our communities so that we can perform early interventions. We need accurate models of our world and the ability to simulate the impact of policy changes on overall healthcare costs. We need new MIMD hardware with core and memory architectures that keep cores fed with the right data.
Dan McCreary
Dan is a distinguished engineer in AI working on innovative database architectures, including document and graph databases. He has a strong background in semantics, ontologies, NLP, and search. He is a hands-on architect and likes to build his own pilot applications using new technologies. Dan started the NoSQL Now! Conference (now called the Database Now! Conferences). He also co-authored the book Making Sense of NoSQL, one of the highest-rated books on NoSQL on Amazon. Earlier in his career, Dan was a VLSI circuit designer at Bell Labs, where he worked with Brian Kernighan (of K&R C). Dan also worked with Steve Jobs at NeXT Computer.
Harshit Khaitan
Harshit Khaitan is the Director of AI Accelerators at Meta, where he leads the development of AI accelerators for Reality Labs products. Prior to Meta, he was the technical lead and co-founder of the edge machine learning accelerator effort at Google, responsible for the MLA in Google Pixel 4 (Neural Core) and Pixel 6 (Google Tensor SoC). He has also held individual contributor and technical leadership positions on Google's first Cloud TPU, Nvidia Tegra SoCs, and Nvidia GPUs. He holds 10+ US and international patents in on-device AI acceleration. He has a Master's degree in Computer Engineering from North Carolina State University and a Bachelor's degree in Electrical Engineering from Manipal Institute of Technology, India.
Daniel Wu
Daniel Wu is an accomplished technical leader with over 20 years of expertise in software engineering, AI/ML, and team development. With a diverse career spanning technology, education, finance, and healthcare, he is credited with establishing high-performing AI teams, pioneering point-of-care expert systems, co-founding a successful online personal finance marketplace, and leading the development of an innovative online real estate brokerage platform. Passionate about technology democratization and ethical AI practices, Daniel actively promotes these principles through involvement in computer science and AI/ML education programs. A sought-after speaker, he shares insights and experiences at international conferences and corporate events. Daniel holds a computer science degree from Stanford University.
Girish Venkataramani
Ravi Narayanaswami
Prasun Raha
Nikunj Kotecha
Nikunj Kotecha is a Machine Learning Solutions Architect at BrainChip Inc. He develops and optimizes machine learning algorithms for the Akida™ neuromorphic hardware, demonstrates Akida's capabilities to clients, and supports them in building their neuromorphic solutions on Akida. He holds a Master of Science in Computer Science, specializing in artificial intelligence and deep learning. During his Master's, he was part of the Machine Learning lab, published technical papers, and supported research into different avenues of AI, including published work on cross-modal fusion with Transformer architectures for sign language translation. He has also worked at Oracle, where he built and integrated machine learning solutions to provide operational benefits for users of Oracle's clinical trial software.