Overview

Misinformation, disinformation, and “fake news” have been used as a means of influence for millennia, but the proliferation of the internet and social media in the 21st century has enabled nefarious campaigns to achieve unprecedented scale, speed, precision, and effectiveness. In the past few years, there has been significant recognition of the threats posed by malign influence operations to geopolitical relations, democratic institutions and processes, public health and safety, and more. At the same time, the digitization of communication offers tremendous opportunities for human language technologies (HLT) to observe, interpret, and understand this publicly available content. The ability to infer intent and impact, however, remains much more elusive.

In the first half of this tag-team keynote, Dr. Danelle Shah will highlight the importance of deriving insights not just from the content of online influence campaigns, but also from the social and cultural context in which they are waged. She will discuss the nuances of truth and intent and examine the challenges and opportunities for HLT to combat misinformation when truth is subjective and words don’t mean what they say.

In the second half, Dr. Joseph Campbell will share several HLT highlights from MIT Lincoln Laboratory. These include key enabling technologies for combating misinformation: linking personas, analyzing content, and understanding human networks. Developing operationally relevant technologies requires access to corresponding data and meaningful evaluations, as Dr. Douglas Reynolds presented in his keynote. As Danelle discussed, it’s crucial to develop these technologies to operate at levels deeper than the surface. Producing reliable information by fusing incomplete and inherently unreliable information channels is paramount. Furthermore, the dynamic misinformation environment and the coevolution of allied and adversarial methods pose additional challenges.