Machine Learning at the Edge: Using HLS to Optimize Power and Performance
Moving machine learning to the edge imposes critical requirements on power
and performance. Off-the-shelf solutions are not practical: CPUs are
too slow, GPUs/TPUs are expensive and consume too much power, and
even generic machine learning accelerators can be overbuilt and suboptimal
for power. This paper describes how to create new power- and memory-
efficient hardware architectures that meet next-generation machine learning
hardware demands at the edge.
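As a rough illustration of the approach, the sketch below shows the kind of tailoring high-level synthesis makes possible: a fixed-function, 8-bit quantized dot-product kernel whose datapath width, parallelism, and memory interfaces are sized to the workload rather than to a general-purpose device. The kernel, its dimensions, and the Vitis HLS-style pragmas are assumptions for illustration, not taken from the paper; unknown pragmas are simply ignored by an ordinary C++ compiler.

#include <cstdint>
#include <cstddef>

// Hypothetical sizes chosen for illustration only.
constexpr std::size_t VEC_LEN = 256;  // feature-vector length
constexpr std::size_t UNROLL  = 8;    // parallel MAC lanes

// 8-bit weights/activations with a 32-bit accumulator: narrow datapaths
// and on-chip buffers are one way an HLS design trades generality for power.
int32_t dot_q8(const int8_t w[VEC_LEN], const int8_t x[VEC_LEN]) {
#pragma HLS INTERFACE ap_memory port=w  // assumed HLS-style pragmas;
#pragma HLS INTERFACE ap_memory port=x  // a plain C++ compiler ignores them
    int32_t acc = 0;
    for (std::size_t i = 0; i < VEC_LEN; i += UNROLL) {
#pragma HLS PIPELINE II=1               // one block of MACs per clock cycle
        int32_t partial = 0;
        for (std::size_t j = 0; j < UNROLL; ++j) {
#pragma HLS UNROLL                      // instantiate UNROLL multipliers
            partial += static_cast<int32_t>(w[i + j]) * x[i + j];
        }
        acc += partial;
    }
    return acc;
}

Because the C++ compiles and runs unchanged on a host CPU, the same source can be verified in software and then synthesized, with power and area tuned by adjusting the unroll factor and data widths rather than rewriting RTL.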