The emerging Processing-in-Memory (PIM) architecture shows promise for efficiently handling Deep Neural Network (DNN) inference and training by minimizing data movement through in-memory analog computing. Unfortunately, the energy efficiency of PIM remains limited because it struggles to efficiently process the massive nonlinear activation functions (AFs) widely used in DNNs. In current solutions, AFs must be evaluated in the digital domain by co-processors or lookup-table (LUT) modules, so a large amount of data undergoes a round-trip conversion between the analog and digital domains at each AF. As a result, the unavoidable analog-to-digital and digital-to-analog (AD/DA) conversions dominate the system's power consumption and degrade overall energy efficiency. To address these issues, we propose a reconfigurable analog module for nonlinear AF computation, named RAM-NAF. RAM-NAF is a purely analog module that uses Taylor approximation (TA) to fit arbitrary AFs, thereby reducing AD/DA conversions. To improve accuracy, RAM-NAF adopts a segmentation calculation method (SCM) tailored to the characteristics of nonlinear AFs. Moreover, RAM-NAF is reconfigurable to support various AFs and can be easily integrated with existing PIM accelerators. Experimental results show that RAM-NAF significantly improves the performance of PIM accelerators and reduces energy consumption: when performing inference on various PIM accelerators, the energy consumption of AD/DA conversions is reduced by up to 12.31×, overall energy efficiency improves by 2.34× to 5.20×, and the accuracy loss stays below 1%.
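To illustrate the general idea of combining Taylor approximation with segmentation, the following is a minimal numerical sketch in Python, not the paper's analog implementation: it fits the sigmoid AF with a third-order Taylor polynomial expanded around the center of each input segment. The segment count, input range, and expansion order here are illustrative assumptions, not values from the paper.

```python
import math

def taylor_sigmoid(x, x0):
    # Third-order Taylor expansion of sigmoid around the segment center x0.
    # Closed-form derivatives of sigmoid s(x) = 1 / (1 + e^(-x)):
    #   s'   = s(1 - s)
    #   s''  = s'(1 - 2s)
    #   s''' = s'(1 - 6s + 6s^2)
    s = 1.0 / (1.0 + math.exp(-x0))
    d1 = s * (1.0 - s)
    d2 = d1 * (1.0 - 2.0 * s)
    d3 = d1 * (1.0 - 6.0 * s + 6.0 * s * s)
    dx = x - x0
    return s + d1 * dx + d2 * dx**2 / 2.0 + d3 * dx**3 / 6.0

def segmented_sigmoid(x, segments=8, lo=-6.0, hi=6.0):
    # Segmentation step: clamp the input to [lo, hi], select the segment
    # it falls in, and evaluate the local Taylor polynomial around that
    # segment's center. Smaller segments reduce the approximation error.
    x = max(lo, min(hi, x))
    width = (hi - lo) / segments
    idx = min(int((x - lo) / width), segments - 1)
    x0 = lo + (idx + 0.5) * width
    return taylor_sigmoid(x, x0)

# Maximum approximation error over a sweep of the input range.
max_err = max(
    abs(segmented_sigmoid(v / 100.0) - 1.0 / (1.0 + math.exp(-v / 100.0)))
    for v in range(-600, 601)
)
```

With eight segments over [-6, 6], the local cubic expansions keep the worst-case error small, which mirrors why per-segment low-order polynomials can track a smooth AF closely while staying cheap to evaluate in hardware.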