Assessing AI's Real-World Impact Through News Media
Examining how news shapes views on AI's negative effects.
Mowafak Allaham, Kimon Kieslich, Nicholas Diakopoulos
― 6 min read
Table of Contents
- Tapping into News Media
- The Challenge of Finding Impacts
- The Big Idea Behind Our Research
- Where We Got Our Information
- How We Analyzed the Data
- Breakdown of Negative Impacts
- Results of Our Analysis
- What This Means for AI Developers
- Limitations of Our Study
- A Cautionary Note
- Wrapping Up
- Original Source
- Reference Links
When it comes to studying how AI affects our lives, researchers usually rely on frameworks built around expert opinions. But here's the catch: expert opinions often miss the real-world effects of AI on everyday people. How people experience AI varies with where they're from and what they've lived through. In this article, we'll look at how we can improve these assessments by tapping into what's being said in the news.
Tapping into News Media
We decided to examine news articles from around the world to see the stories they tell about AI. By focusing on the negative impacts of AI reported in these articles, we can gather diverse perspectives and experiences that experts might miss. This is important because the media shapes how people think about technology. If the news isn't covering certain impacts, those issues might just fade into the background.
The Challenge of Finding Impacts
Identifying how AI can negatively impact society is no small task; it's tricky and resource-intensive. Researchers have tried different frameworks to assess these impacts, but these often reflect their own backgrounds and biases. While they might highlight certain concerns, they might miss other important ones, especially those relevant to different cultures or communities.
This is why we think it’s a good idea to use large language models (LLMs) to help us analyze impacts. These models can process vast amounts of information quickly, but they aren’t perfect. They can reflect biases present in the data they were trained on. So, while it could be a smart move to use LLMs in this context, we need to be careful about what insights we gather from them.
The Big Idea Behind Our Research
Our main goal? To enhance impact assessments by drawing on a wide range of views from news articles. By fine-tuning LLMs on the negative impacts of AI mentioned in the news, we can help developers and researchers understand potential issues before they deploy new technologies. This can help ensure that diverse voices are heard in discussions about AI's future.
Where We Got Our Information
To dive into this, we gathered 91,930 news articles published between January 2020 and June 2023. These articles came from 266 different news sources across 30 countries. We then focused on identifying discussions around negative impacts resulting from AI technologies. In total, 17,590 articles in our collection made some mention of these negative impacts, indicating that people are definitely talking about the risks of AI.
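This summary doesn't spell out the exact filtering pipeline, but to make the step concrete, here is a minimal sketch of one plausible keyword-based pre-filter in Python. The file name, column names, and keyword list are all illustrative assumptions, not the paper's actual setup:

```python
# Illustrative sketch only: the exact filtering pipeline is not specified
# here, so this shows one plausible keyword-based pre-filter.
import pandas as pd

# Hypothetical schema: one row per article with source, country, date, text.
articles = pd.read_csv("news_articles.csv")  # hypothetical file

# Restrict to the study window (January 2020 - June 2023).
articles["date"] = pd.to_datetime(articles["date"])
in_window = articles[
    (articles["date"] >= "2020-01-01") & (articles["date"] <= "2023-06-30")
]

# Naive keyword pre-filter for mentions of AI harms; a real pipeline would
# likely add a classifier or LLM pass to confirm that a negative impact of
# an AI system is actually being discussed.
harm_terms = ["risk", "harm", "bias", "surveillance", "job loss", "misinformation"]
mask = in_window["text"].str.contains("|".join(harm_terms), case=False, na=False)
candidates = in_window[mask]
print(f"{len(candidates)} candidate articles mention possible negative impacts")
```

A keyword filter like this over-selects, so a confirmation pass would likely follow before counting an article as genuinely discussing a negative impact.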
How We Analyzed the Data
We developed a systematic way to summarize information from the articles. For each article, we pulled out two main pieces of information: a description of the AI system discussed and the negative impacts associated with it. This information then allowed us to create a dataset that helps researchers assess the negative impacts of AI more effectively.
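As a rough illustration of that two-field extraction step, one could prompt an instruction-tuned LLM once per article. The exact prompts and model used in the study are not given in this summary, so the wording and model name below are assumptions:

```python
# Minimal sketch of the two-field extraction step; prompt wording and model
# choice are assumptions, not the paper's exact configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACTION_PROMPT = """Read the news article below.
Return two fields:
1. System: a one-sentence description of the AI system discussed.
2. Impacts: the negative impacts associated with that system.

Article:
{article}"""

def extract_system_and_impacts(article_text: str) -> str:
    """Ask the model to pull out the AI system description and its impacts."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any capable instruction-tuned LLM works
        messages=[{"role": "user",
                   "content": EXTRACTION_PROMPT.format(article=article_text)}],
    )
    return response.choices[0].message.content
```

Running this over the 17,590 impact-mentioning articles would yield the paired (system description, negative impacts) records that make up the dataset.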
Breakdown of Negative Impacts
From our analysis, we found ten categories of negative impacts mentioned in the news articles (a classification sketch follows the list):
- Societal Impacts: These impacts highlight how AI can affect society, such as spreading misinformation or undermining public trust through deepfakes.
- Economic Impacts: This covers job losses and economic uncertainty caused by AI, such as replacing human workers with automated chatbots.
- Privacy: Discussions on privacy often center around surveillance technologies, like facial recognition, that may compromise individuals’ rights.
- Autonomous System Safety: This addresses the risks associated with technologies like self-driving cars or drones, which could lead to accidents or injuries.
- Physical and Digital Harms: Impacts in this category cover dangers in both physical and digital spaces, including harmful AI behaviors online and risks in warfare.
- AI Governance: This category sheds light on the need for regulations to manage AI technologies responsibly and ensure accountability.
- Accuracy and Reliability: Concerns in this category center on how reliable AI outputs are, with issues like "hallucinations" and incorrect information.
- AI-generated Content: AI’s ability to produce various forms of content can make it hard to distinguish between fake and real items, raising ethical questions.
- Security: Cyber threats using AI technologies, like phishing attacks, fall under this category and could endanger sensitive information.
- Miscellaneous Risks: This includes any other negative impacts that didn’t fit into the previous categories, such as the environmental cost of training AI models.
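To show how such a taxonomy might be applied in practice, here is a hedged zero-shot tagging sketch for the ten categories above. The prompt wording and model name are our own illustration, not necessarily the method used in the paper:

```python
# Illustrative zero-shot tagging of extracted impacts into the ten
# categories above; prompt wording is an assumption, not the paper's.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

CATEGORIES = [
    "Societal Impacts", "Economic Impacts", "Privacy",
    "Autonomous System Safety", "Physical and Digital Harms",
    "AI Governance", "Accuracy and Reliability", "AI-generated Content",
    "Security", "Miscellaneous Risks",
]

def tag_impact(impact: str) -> str:
    """Ask the model to pick exactly one category for an impact statement."""
    prompt = (
        "Assign this negative impact of an AI system to exactly one of the "
        "following categories: " + "; ".join(CATEGORIES) + ".\n\n"
        f"Impact: {impact}\nAnswer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()
```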
Results of Our Analysis
We evaluated the impacts generated by a fine-tuned model and a larger model to compare their quality along four dimensions: coherence, structure, relevance, and plausibility. Surprisingly, we found that a small open-source model fine-tuned on news media (Mistral-7B) could produce impacts comparable in quality to those from a much larger model (GPT-4). Better yet, the smaller model captured a wider range of impact categories that the larger model missed.
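For readers curious what "fine-tuned on news media" involves, below is a minimal parameter-efficient (LoRA) setup sketch for Mistral-7B using Hugging Face's transformers and peft libraries. The hyperparameters and data formatting are illustrative assumptions, not the paper's exact configuration:

```python
# Minimal LoRA setup sketch for Mistral-7B; hyperparameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base weights and trains small low-rank adapter matrices,
# which keeps a 7B-parameter model trainable on modest hardware.
lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training would then proceed with a standard causal-LM loop over
# (article summary -> negative impacts) pairs built from the dataset above.
```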
What This Means for AI Developers
The findings from this research show that using news media can help us understand the societal concerns around AI better. It opens up avenues for builders and researchers to think about the broader implications of their technologies. By recognizing a wider range of negative impacts, we can help ensure that future AI development includes diverse voices, especially those that are often overlooked.
Limitations of Our Study
Of course, our study comes with its own set of limitations. News media can contain biases, which could influence the types of impacts we were able to assess. For example, an outlet's credibility, political leaning, and other factors can skew the data. That's why it's essential for future research to reflect on these biases and how they shape the impacts that models fine-tuned on this data generate.
A Cautionary Note
While our fine-tuned models are helpful, there's a risk of relying on them too heavily. If people start treating their outputs as conclusive, critical thinking can atrophy. Tools like these should aid the assessment process, not replace human analysis.
Wrapping Up
In conclusion, our work points to exciting opportunities in the field of AI impact assessments. By leveraging news media and using advanced models, we can gain a clearer picture of how AI technologies may affect society. This can guide developers and policymakers in making informed decisions that genuinely reflect the needs and concerns of all people.
So, the next time you read about AI in the news, remember that it's not just about tech; it's about real lives, real concerns, and the diverse opinions that shape our world. The future of AI needs all voices to join in the conversation. And let's be honest: who wouldn't appreciate a little more dialogue about this technology that's becoming a bigger part of our lives every day?
Title: Towards Leveraging News Media to Support Impact Assessment of AI Technologies
Abstract: Expert-driven frameworks for impact assessments (IAs) may inadvertently overlook the effects of AI technologies on the public's social behavior, policy, and the cultural and geographical contexts shaping the perception of AI and the impacts around its use. This research explores the potentials of fine-tuning LLMs on negative impacts of AI reported in a diverse sample of articles from 266 news domains spanning 30 countries around the world to incorporate more diversity into IAs. Our findings highlight (1) the potential of fine-tuned open-source LLMs in supporting IA of AI technologies by generating high-quality negative impacts across four qualitative dimensions: coherence, structure, relevance, and plausibility, and (2) the efficacy of small open-source LLM (Mistral-7B) fine-tuned on impacts from news media in capturing a wider range of categories of impacts that GPT-4 had gaps in covering.
Authors: Mowafak Allaham, Kimon Kieslich, Nicholas Diakopoulos
Last Update: 2024-11-04
Language: English
Source URL: https://arxiv.org/abs/2411.02536
Source PDF: https://arxiv.org/pdf/2411.02536
Licence: https://creativecommons.org/licenses/by/4.0/
Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.
Thank you to arxiv for use of its open access interoperability.
Reference Links
- https://arxiv.org/abs/2306.05949
- https://dl.acm.org/doi/10.1145/3442188.3445935
- https://arxiv.org/abs/2011.13170
- https://dblp.org/rec/journals/corr/abs-2011-13170.bib
- https://dblp.org
- https://arxiv.org/abs/2305.07153
- https://arxiv.org/abs/2108.07258
- https://dblp.org/rec/journals/corr/abs-2108-07258.bib
- https://dl.acm.org/doi/10.1145/3531146.3533088
- https://dl.acm.org/doi/10.1145/3461702.3462608
- https://doi.org/10.1145/3306618.3314285
- https://doi.org/10.1162/tacl
- https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl
- https://arxiv.org/abs/2210.05791
- https://dsa-observatory.eu/2024/07/31/what-do-we-talk-about-when-we-talk-about-risk-risk-politics-in-the-eus-digital-services-act/