New method helps AI models forget data without accuracy loss
Researchers have developed a new method that allows large language models to forget sensitive data effectively without significantly impacting their overall performance.
Authors: Taha Entesari et al. (Johns Hopkins University, Amazon)
By framing unlearning as a constrained optimization problem, the researchers were able to remove specific data from a model while maintaining its performance on remaining tasks. This contrasts with previous methods, which often reduced model effectiveness.
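To make the constrained, primal-dual idea concrete, here is a minimal PyTorch sketch of one way such an approach can work. It assumes a generic classifier with separate "forget" and "retain" dataloaders; the entropy-style forget objective, the retain-loss budget, and all hyperparameters are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import torch

def primal_dual_unlearn(model, forget_loader, retain_loader,
                        retain_budget=0.1, steps=100,
                        lr_theta=1e-5, lr_lambda=1e-2):
    """Hypothetical sketch: unlearn a forget set subject to a
    retain-set loss constraint, solved via a primal-dual loop."""
    opt = torch.optim.SGD(model.parameters(), lr=lr_theta)
    lam = torch.tensor(0.0)  # dual variable for the retain-loss constraint

    for step, ((xf, _), (xr, yr)) in enumerate(
            zip(forget_loader, retain_loader)):
        if step >= steps:
            break

        # Primal step: push forget-set predictions toward high entropy
        # (i.e., uninformative), while the Lagrangian term penalizes
        # retain-set loss above its budget.
        log_probs = torch.log_softmax(model(xf), dim=-1)
        entropy = -(log_probs.exp() * log_probs).sum(-1).mean()
        forget_obj = -entropy  # minimizing this maximizes entropy

        retain_loss = torch.nn.functional.cross_entropy(model(xr), yr)
        lagrangian = forget_obj + lam.detach() * (retain_loss - retain_budget)

        opt.zero_grad()
        lagrangian.backward()
        opt.step()

        # Dual step: projected gradient ascent on the multiplier. Lambda
        # grows whenever retain loss exceeds the budget, tightening the
        # constraint; it is clamped at zero otherwise.
        with torch.no_grad():
            lam = torch.clamp(
                lam + lr_lambda * (retain_loss.detach() - retain_budget),
                min=0.0)
    return model
```

The appeal of a primal-dual formulation over a fixed penalty weight is that the multiplier adapts automatically: the trade-off between forgetting and retention is tuned by the optimization itself rather than by hand.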
The findings suggest that it’s possible to remove sensitive information from models without harming their overall utility. This could help organizations meet privacy standards such as GDPR while continuing to use effective AI systems.
The method has been tested mainly in controlled environments. Its performance in real-world settings, where unlearning needs may be more varied and frequent, still needs to be evaluated.
This method provides a practical approach to data removal in language models, showing that unlearning does not have to come at the cost of model performance.
📄 Read the full paper: Constrained Entropic Unlearning: A Primal-Dual Framework for Large Language Models
Read the full article on Tech in Asia