AI is emerging as the foundational capability that defines the user experience, and it is reshaping the semiconductor industry into a high-volume, multi-market business spanning an unprecedented diversity of power envelopes and form factors: from sub-5W battery-powered devices such as AI pins to 500W cloud servers. This transition is also driving distributed, hybrid AI across personal devices, the edge, and the cloud, enabling personal AI systems that deliver low latency, improved privacy, and higher reliability, the essential benefits of Edge AI.
To design silicon products that address the diverse AI workloads, power budgets, and form factors of these markets, we can leverage a common, modular IP library, including CPU, GPU, AI accelerators, connectivity, and security. However, the current process of adapting each IP and integrating it into a new SoC demands significant human effort. To minimize NRE, we need tools that support and automate this process, enabling engineering teams to quickly pivot and assemble SoCs that meet the full spectrum of needs, from very low power to very high performance.
Moreover, the slowing of Moore's Law and the rise of chiplets bring additional challenges, necessitating new tools for 3D-IC floorplanning, package design, and thermal design.
In this talk, I will focus on the gaps between the current state of EDA tools and the semiconductor industry's need to leverage common IP across diverse products. I will also share perspectives on approaches that can help close these gaps.