MIT CSAIL’s TextFooler generates adversarial text to strengthen natural language models
February 7, 2020
AI and machine learning models are vulnerable to adversarial samples: inputs that differ from the originals by small, deliberate alterations yet cause a model to misbehave. That is especially problematic as natural language models become capable of generating humanlike text, because it makes them attractive to malicious actors who could use them to produce misleading media. In pursuit of a technique that illustrates the extent to which adversarial text…
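To make the idea concrete, here is a minimal sketch of a greedy synonym-substitution attack in the spirit of TextFooler. It is not the authors' implementation: the keyword-counting `predict` classifier and the hard-coded `SYNONYMS` table below are toy stand-ins invented for illustration, whereas TextFooler attacks real models such as BERT and draws substitution candidates from word embeddings filtered for semantic similarity.

```python
# Toy sketch of a greedy synonym-substitution attack.
# predict() and SYNONYMS are hypothetical stand-ins, not TextFooler's code.

# Stand-in "model": a keyword-based sentiment scorer playing the role of a
# black-box classifier under attack.
POSITIVE = {"great", "wonderful", "excellent", "superb"}
NEGATIVE = {"terrible", "awful", "dreadful", "poor"}

def predict(tokens):
    """Return (label, confidence) for a list of tokens. Toy stand-in."""
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    label = "positive" if score > 0 else "negative"
    confidence = abs(score) / max(len(tokens), 1)
    return label, confidence

# Stand-in synonym source; a real attack would pull semantically similar
# candidates from word embeddings rather than a hand-written table.
SYNONYMS = {
    "great": ["good", "fine", "decent"],
    "wonderful": ["pleasant", "nice"],
    "terrible": ["bad", "unpleasant"],
}

def attack(sentence):
    tokens = sentence.lower().split()
    orig_label, orig_conf = predict(tokens)

    # Rank each word by how much the model's confidence in the original
    # label drops when that word is deleted.
    def importance(i):
        label, conf = predict(tokens[:i] + tokens[i + 1:])
        return orig_conf - conf if label == orig_label else orig_conf

    order = sorted(range(len(tokens)), key=importance, reverse=True)

    # Greedily swap the most important words for synonyms until the
    # predicted label flips.
    for i in order:
        best = tokens
        for candidate in SYNONYMS.get(tokens[i], []):
            trial = tokens[:i] + [candidate] + tokens[i + 1:]
            label, conf = predict(trial)
            if label != orig_label:
                return " ".join(trial)  # label flipped: adversarial example
            if conf < predict(best)[1]:
                best = trial  # keep the swap that most erodes confidence
        tokens = best
    return None  # no adversarial example found with this synonym budget

print(attack("the movie was great and wonderful"))
```

Run on the toy input, the search returns a near-paraphrase ("the movie was good and pleasant") that the stand-in classifier labels differently, the same failure mode TextFooler exposes in far stronger models.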