Article ID: 20.20230427
This brief introduces an area-efficient AdderNet hardware accelerator. AdderNet replaces the multiply-accumulate operations of neural network processing with additions, thereby reducing computational cost. However, the previous accelerator uses two adders per kernel computation to implement the absolute-value operation, which leaves circuit redundancy. For efficient AdderNet acceleration, we propose a reconfigurable kernel unit and a merged adder tree structure that relax this circuit overhead. The proposed merged adder tree reduces the computing area by 23-28% compared with the state-of-the-art AdderNet hardware architecture.
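To make the multiplication-free kernel concrete: an AdderNet kernel scores an input patch by the negated L1 distance to the weights, so only subtractions, absolute values, and accumulations are needed. The sketch below (NumPy, with illustrative names and shapes that are not from the brief's hardware design) shows this in place of the usual multiply-accumulate.

```python
import numpy as np

def adder_kernel(x, w):
    """AdderNet kernel output for one patch.

    A standard convolution kernel computes sum(x * w) with
    multiply-accumulate units. AdderNet instead computes the
    negated sum of absolute differences,
        y = -sum(|x - w|),
    which needs only adders/subtractors in hardware.
    """
    return -np.sum(np.abs(x - w))

# Toy 3x3 input patch and kernel (illustrative values only)
x = np.arange(9.0).reshape(3, 3)
w = np.ones((3, 3))
y = adder_kernel(x, w)  # more similar patches give larger (less negative) y
```

The absolute value is what the previous accelerator spent a second adder on per kernel computation; the brief's reconfigurable kernel unit and merged adder tree target exactly that overhead.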