Social media platforms are powerful tools for spreading information — and misinformation — about health issues such as vaccines and cancer prevention. How does bad information spread online, and what is the best way to stop it? That is a topic being studied by Assistant Professor Jingwen Zhang and her students in the Department of Communication.
In a Q&A for the medical website Infectious Diseases Consultant, graduate student Jade (Jieyu) Ding Featherstone discusses their recent study of tweets related to the influenza vaccine.
Featherstone, Zhang and graduate student Quisi Sun collected six weeks’ worth of tweets about influenza at the peak of the 2017–18 flu season. From more than 120,000 tweets, they used machine learning to classify 8.6 percent as misinformation and 91.4 percent as non-misinformation. The misinformation tweets (7,814) clustered around topics including government conspiracies, media scams and President Trump.
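The article does not describe which machine learning method the researchers used. As a purely illustrative sketch of how supervised text classification of tweets can work, here is a toy naive Bayes classifier in Python; the example tweets and labels below are invented for demonstration and are not from the study's dataset.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    """Crude tokenizer: lowercase and split on whitespace."""
    return text.lower().split()

def train(texts, labels):
    """Fit a multinomial naive Bayes model: per-class word counts and class priors."""
    word_counts = defaultdict(Counter)
    class_counts = Counter(labels)
    vocab = set()
    for text, label in zip(texts, labels):
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    return word_counts, class_counts, vocab

def predict(model, text):
    """Return the most probable class label under add-one (Laplace) smoothing."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_score = None, -math.inf
    for label in class_counts:
        # log prior plus summed log likelihoods of each token
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((word_counts[label][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training examples, labeled by hand (as a real study would label
# a training set before classifying the full collection).
train_texts = [
    "flu shot is a government conspiracy to control us",
    "the media is lying about the flu vaccine scam",
    "get your flu shot at the clinic this week",
    "CDC recommends annual influenza vaccination",
]
train_labels = ["misinformation", "misinformation",
                "non-misinformation", "non-misinformation"]

model = train(train_texts, train_labels)
print(predict(model, "flu vaccine conspiracy exposed by the media"))   # misinformation
print(predict(model, "scheduled my influenza vaccination today"))      # non-misinformation
```

A real classifier would be trained on thousands of hand-labeled tweets and use richer features, but the principle is the same: learn word patterns from labeled examples, then apply them to the rest of the collection.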
Nurses and doctors should be aware of the current discourse and claims around vaccine misinformation so that they can educate and warn their patients, Featherstone told IDCon. It is also an ethical issue for the social media companies that provide a platform for this information, and possibly a matter for legislation if misinformation threatens public health, she said.
Earlier this year, Zhang’s lab published a study on how cancer prevention messages spread on Twitter. Working with a sample of more than 100,000 tweets, they found that messages about cervical cancer screening and HPV vaccination from institutions and organizations spread more widely than personal stories.
Machine learning tools for analyzing social media could be useful for public health campaigns, Featherstone said.
The work on influenza vaccines was presented at the annual meeting of the American Public Health Association in November 2019.
— Andy Fell, UC Davis News and Media Relations, wrote this article for the UC Davis Egghead research blog.