Google is using machine learning to help design its next generation of machine learning chips. Engineers at Google say the algorithm's designs are "comparable or superior" to those created by humans, but can be generated much faster. According to the tech giant, work that takes humans months can be accomplished by AI in under six hours.
Google has been working on applying machine learning to chip design for years, but the effort is described in a paper published this week in the journal Nature. This appears to be the first time the research has been applied to a commercial product: an upcoming version of Google's own tensor processing unit (TPU) chips, which are optimized for AI computation.
"Our method has been put into production to design the next-generation Google TPU," write the authors of the paper, which was co-led by Google research scientists Azalia Mirhoseini and Anna Goldie.
In other words, AI is helping to accelerate the pace of its own development.
In the paper, Google's engineers note that this work has "significant impact" for the chip industry. It should allow companies to explore the possible architecture space for upcoming designs more quickly, and to more easily customize chips for specific workloads.
An editorial in Nature calls the research a "significant achievement," and notes that such work could help offset the forecasted end of Moore's Law, the axiom of chip design from the 1970s that states the number of transistors on a chip doubles every two years. AI won't necessarily solve the physical challenge of squeezing more and more transistors onto a chip, but it could help find other paths to improving performance at the same rate.
The specific task that Google's algorithm tackles is known as "floorplanning." This usually requires human designers who work with the aid of computer tools to find the optimal layout on a silicon die for a chip's subsystems. These components include things like CPUs, GPUs, and memory cores, which are connected together using tens of kilometers of minuscule wiring. Deciding where on the die each component goes affects the eventual speed and efficiency of the chip. And, given both the scale of chip manufacture and computational cycles, nanometer-scale changes in placement can end up having huge effects.
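To get a feel for what's being optimized, here is a toy illustration of the floorplanning objective (not Google's actual method, and the grid, components, and nets are made up): candidate placements of a few blocks on a die are scored by estimated wiring, using the half-perimeter wirelength (HPWL) heuristic common in placement tools.

```python
def hpwl(placement, nets):
    """Estimate the wiring a placement needs.

    placement: dict mapping component name -> (x, y) grid position
    nets: list of tuples of component names that must be wired together
    """
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        # Half the perimeter of the bounding box enclosing the net's pins.
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two candidate layouts for the same three blocks (illustrative only):
nets = [("cpu", "memory"), ("cpu", "gpu"), ("gpu", "memory")]
spread_out = {"cpu": (0, 0), "gpu": (9, 0), "memory": (0, 9)}
clustered = {"cpu": (0, 0), "gpu": (1, 0), "memory": (0, 1)}

print(hpwl(spread_out, nets))  # 36
print(hpwl(clustered, nets))   # 4
```

The clustered layout needs far less wiring, which is exactly the kind of difference a floorplanner is hunting for, multiplied across thousands of components.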
Google's engineers note that devising a floor plan takes humans "months of intense effort," but from a machine learning perspective, there is a familiar way to frame this problem: as a game.
AI has proven time and time again that it can outperform humans at board games like chess and Go, and Google's engineers point out that floorplanning is analogous to such challenges. Instead of a game board, there's a silicon die. Instead of pieces like knights and rooks, there are components like CPUs and GPUs. The task, then, is simply to find each board's "win conditions." In chess that might be checkmate; in chip design it's computational efficiency.
Google's engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been randomly generated. Each design was tagged with a specific "reward" function based on its success across different metrics, like the length of wire required and power usage. The algorithm then used this data to distinguish between good and bad floor plans and, in turn, generate its own designs.
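The idea of a reward signal can be sketched in a few lines. This is a hedged illustration only: the metric names and weights below are assumptions for the example, not Google's actual formula, which the paper does not spell out in these terms.

```python
def reward(wire_length_m, power_w, w_wire=1.0, w_power=2.0):
    """Score a candidate floor plan as a single scalar.

    Longer wiring and higher power draw lower the reward, and a
    reinforcement learning agent tries to maximize it.
    """
    return -(w_wire * wire_length_m + w_power * power_w)

# Tagging a small batch of candidate designs, good and bad alike:
candidates = [
    {"name": "design_a", "wire_length_m": 52.0, "power_w": 18.0},
    {"name": "design_b", "wire_length_m": 47.0, "power_w": 21.0},
]
for design in candidates:
    design["reward"] = reward(design["wire_length_m"], design["power_w"])

best = max(candidates, key=lambda d: d["reward"])
print(best["name"])  # design_a
```

Collapsing several metrics into one scalar is what lets the algorithm compare wildly different layouts directly, the same way a game score lets a chess engine compare wildly different board positions.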
As we've seen when AI systems play board games against humans, machines don't necessarily think like people, and they often arrive at unexpected solutions to familiar problems. When DeepMind's AlphaGo played human champion Lee Sedol at Go, this dynamic led to the famous "move 37": a seemingly illogical placement of a piece by the AI that nonetheless led to victory.
Nothing quite so dramatic happened with Google's chip design algorithm, but its floor plans nonetheless look quite different from those created by humans. Instead of neat rows of components laid out on the die, subsystems appear to be scattered across the silicon almost at random. An illustration from Nature shows the difference, with the human design on the left and the machine learning design on the right: the human layout is orderly, while the AI's is jumbled. (The layouts have been blurred in Google's paper because they're confidential.)
This paper is noteworthy, particularly because its research is now being used commercially by Google. But it's far from the only aspect of AI-assisted chip design. Google itself has explored using AI in other parts of the process, like "architecture discovery," and rivals like Nvidia are looking into other methods to speed up their workflows. The virtuous cycle of AI designing chips for AI looks like it's only just beginning.
Update, Thursday, June 10th, 3:17PM EST: Updated to clarify that Google's Azalia Mirhoseini and Anna Goldie were co-leads on the paper.