Using AI to Predict Economic Choices

This study shows how AI can predict human decision-making in economic scenarios.

Economic choice prediction is the task of forecasting the decisions people will make when faced with a set of options. It is a challenging task in practice, largely because data on human choices is hard to gather. Most studies in experimental economics have focused on simple choices. Recently, researchers in artificial intelligence (AI) have explored whether Large Language Models (LLMs) can replace human subjects in predicting these simpler choices. They have also looked into the role of machine learning in more complex settings that involve repeated interactions and language-based communication, such as persuasion games.

This raises an interesting question: Can LLMs fully simulate economic environments and produce data that helps predict how people will choose, thus replacing traditional economic labs? Our study takes a first step in this direction and shows that it is possible. We found that a model trained only on data generated by LLMs can effectively predict human behavior in a persuasion game, and can even do better than models trained on real human data.

In machine learning, having good data is crucial. Large, high-quality datasets are needed for models to perform well in tasks like classifying or predicting outcomes. Machine learning models are often used to predict how people will act in economic situations. This requires access to data on human choices, which is not always available due to challenges in collecting, storing, and using this data. Creating the tools needed to collect this data can be complicated and costly, and there are also concerns about privacy and legal issues.

On the other hand, LLMs have advanced rapidly in various applications, including summarizing text, translating languages, and analyzing sentiment. Recent studies have shown that LLM-based agents can act as decision-makers in economic environments, aiming to maximize their outcomes in complex interactions. Using LLMs to generate realistic data offers a promising new approach. If LLMs can emulate human decision-making in economic contexts, they could provide a cheaper, more efficient alternative to traditional methods for training choice prediction models.

In our work, we demonstrate the potential of this approach within a common economic scenario, focusing on a persuasion game. In these games, one player (the sender) tries to influence another player (the receiver) by presenting selective information. The sender knows more about the situation than the receiver, and their goal is to communicate in a way that sways the receiver's decisions. While there has been extensive study of various economic factors at play in persuasion games, we are interested in predicting the decisions of human receivers when they interact with fixed senders, without using any actual human choice data in our training.

Predicting how people will act within this persuasion framework is important for many fields, such as retail, e-commerce, advertising, and recommendation systems. For example, online platforms often employ algorithms to suggest products to users. If these platforms can accurately predict how people react to different persuasion tactics, they can optimize their operations to improve user engagement and sales. Importantly, the possibility of doing this without access to real human data opens doors to testing various strategies in a controlled environment, minimizing risks and maximizing efficiency.

Our Contribution

Our research shows that predicting human behavior in a language-based persuasion game can be accomplished using only LLM-generated data. We utilized a game where a travel agent (expert) tries to convince a decision-maker (DM) to choose their hotel by sharing information about it. The true quality of the hotel is something the expert keeps private, and the DM only benefits from accepting the deal if the hotel is of good quality. As the game progresses, the interactions between the expert and DM become more complex, leading to advanced strategies that may involve learning, cooperation, and even punishment. Notably, we replaced the simplistic message format of the theoretical model with real textual data.
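To make the setup concrete, here is a minimal sketch of a single round in Python. The names, the payoff scheme, and the "good hotel" cutoff are our own illustrative assumptions, not the study's actual implementation:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Review:
    text: str     # the review text the expert may show the DM
    score: float  # the numeric score attached to that review

GOOD_HOTEL_THRESHOLD = 8.0  # assumed cutoff; the study defines quality via the average score

def play_round(reviews, expert_strategy, dm_strategy, history):
    """One round: the expert reveals one review, then the DM decides whether to book."""
    hotel_is_good = mean(r.score for r in reviews) >= GOOD_HOTEL_THRESHOLD
    message = expert_strategy(reviews, history)  # the expert sees all the reviews
    dm_books = dm_strategy(message, history)     # the DM sees only the chosen review
    # Assumed payoffs: the expert gains whenever the DM books, while the DM
    # gains only by booking a good hotel or by skipping a bad one.
    expert_payoff = 1 if dm_books else 0
    dm_payoff = 1 if dm_books == hotel_is_good else 0
    history.append((message, dm_books, hotel_is_good))
    return expert_payoff, dm_payoff
```

Because the history accumulates across rounds, strategies can react to the expert's track record, which is exactly what enables learning, cooperation, and punishment.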

To define our human choice prediction task, we used data collected from previous studies. Our goal was to accurately predict human choices without including any human-generated data in our training. Instead, we focused solely on LLM-generated data.

Our experiments revealed that a prediction model trained on a dataset created by LLM players could accurately predict human choices. In fact, it could outperform models trained on actual human data if there were enough samples. In many real situations, creating a large dataset from LLMs is easier than gathering even a small amount of human choice data.

Moreover, we discovered that if the expert consistently sends the best review regardless of the hotel's true quality, our model's prediction accuracy improves across all sample sizes. This simple expert strategy has been shown to work effectively in similar setups with human decision-makers.

We also found that using varied personas for LLM players reduces the sample size needed to achieve a given level of accuracy. We analyzed the contribution of each persona type to the quality of the dataset and found that every one of them played a significant role.

Related Work

Persuasion games are a key element in economic theory and have various applications in machine learning. We looked specifically at a language-based persuasion game, a repeated two-stage game where the sender communicates first, followed by the receiver. The repeated nature of the game and the reputation of the sender significantly affect the dynamics of persuasion.

Recent research has examined how LLMs can mimic human behavior in different contexts. Some have explored whether LLMs can replace human subjects in social and behavioral research, with caution about their current limitations. Other work has shown LLMs' abilities to solve complex problems, handle creative tasks, and provide human-like responses across diverse groups. Recent studies have also evaluated LLMs in the context of classic behavioral economics experiments.

Apart from simulating human behavior, LLMs have emerged as potential decision-makers in economic setups. This marks a shift from older methods that used algorithms without language capability to solve complex games.

LLMs can enhance machine learning models in various ways. Prior studies have shown that LLMs can replace human annotators and evaluators. In generating data for machine learning, LLM-generated data has been used to improve performance in tasks like document ranking. Our focus, however, is on using LLM-generated data in strategic human choice prediction tasks.

Task Definition

To explain our human choice prediction task, we first look at the language-based persuasion game we used. The game involves two players, an expert (who sends messages) and a decision-maker (who receives messages), interacting over several rounds. At the start of each round, the expert receives a set of hotel reviews, each paired with a numeric score, and the hotel's quality is determined by the average of these scores. The expert then chooses one review to send to the DM. The DM, upon receiving the expert's message, decides whether or not to book the hotel.

Both players aim to maximize their outcomes. The DM's strategy maps the received message and the interaction history to a decision, while the expert's strategy draws on their knowledge of the reviews and the previous interactions.
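Viewed this way, a DM strategy is simply a function of the current message and the history. Here is a toy example written against the hypothetical interface from the earlier sketch; it is purely illustrative and not the study's behavioral model:

```python
def trusting_dm(message, history):
    """Toy DM strategy: book unless the expert recently misled us.

    `history` holds (message, dm_booked, hotel_was_good) tuples from earlier
    rounds, so the DM can condition on the expert's track record.
    """
    recent = history[-3:]
    betrayals = sum(1 for _, booked, good in recent if booked and not good)
    return betrayals == 0  # keep booking while the expert seems honest
```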

Data Collection

For our study, we utilized data collected from interactions between human DMs and experts through a mobile app. The dataset comprised a significant number of decisions made by different human players. We focused on those who completed all stages of the game.

To generate the LLM dataset, we replicated the previous study's collection process using LLMs instead of humans. We kept the same experts, hotels, and game parameters. Each LLM player interacted with experts multiple times, following similar prompts as the human players.

To ensure a variety of responses, we assigned different personas to each LLM player. Each persona provided a distinct approach to decision-making, which allowed us to gather a broader dataset. We generated a large number of decisions, with a smaller set focused on each specific persona.
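As a rough illustration of how persona-conditioned LLM players could be set up, consider the sketch below. The persona texts are invented examples, and `query_llm` is a placeholder for whichever model API is actually called:

```python
PERSONAS = [  # invented examples; the study's actual persona descriptions differ
    "You are a cautious traveler who distrusts overly glowing reviews.",
    "You are an impulsive traveler who books whenever a review sounds exciting.",
    "You are an analytical traveler who weighs evidence before deciding.",
]

def build_dm_prompt(persona, review_text, history_summary):
    return (
        f"{persona}\n"
        f"A travel agent showed you this hotel review:\n\"{review_text}\"\n"
        f"Your past interactions with this agent: {history_summary}\n"
        "Do you book the hotel? Answer YES or NO."
    )

def llm_dm(persona, query_llm):
    """Wrap an LLM call as a DM strategy matching the earlier round interface."""
    def strategy(message, history):
        summary = f"{sum(booked for _, booked, _ in history)} bookings in {len(history)} rounds"
        answer = query_llm(build_dm_prompt(persona, message.text, summary))
        return answer.strip().upper().startswith("YES")
    return strategy
```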

Effectiveness of LLM-Generated Data

In this section, we demonstrate how effective our approach is. We compared the performance of a prediction model trained using human-generated data versus one trained with LLM-generated data. We also included a baseline method that relied solely on linguistic capabilities without economic understanding.
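Schematically, the comparison can be pictured as below: the same model class is trained on each data source and scored on the same held-out human choices. We use scikit-learn's logistic regression purely as a stand-in for whatever prediction architecture the study actually employed:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def evaluate(train_X, train_y, test_X, test_y):
    """Fit one model on a training source and score it on held-out human choices."""
    model = LogisticRegression(max_iter=1000)
    model.fit(train_X, train_y)
    return accuracy_score(test_y, model.predict(test_X))

# Same features, same model class, three training sources:
# acc_human    = evaluate(human_X, human_y, test_X, test_y)
# acc_llm      = evaluate(llm_X, llm_y, test_X, test_y)
# acc_baseline = evaluate(text_only_X, text_only_y, test_X, test_y)
```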

We found that prediction models trained on LLM-generated data outperformed those trained on human data, given a large enough sample. The LLM-trained models also outperformed the baseline, indicating that incorporating simulated interaction leads to better predictions.

Interestingly, our results suggested that while the baseline seemed closer to human behavior in some respects, its data failed to serve as an effective training set for human choice prediction compared to the LLM-generated dataset. This indicates that even though linguistic understanding is important, the strategic and economic components in LLM-generated data greatly enhance prediction capabilities.

Predicting Against a Specific Strategy

After confirming that our LLM-based approach yields high prediction accuracy overall, we examined prediction accuracy separately for each expert strategy. We aimed to determine how well our approach performed against various experts.

Our findings demonstrated that for most expert strategies, the LLM-based method outperformed traditional models trained on human data if enough data points were available. However, there were specific strategies where our approach fell short. Despite this, we consistently found that training with LLM data was superior to training with a linguistic-only baseline.

The SendBest Strategy

One expert strategy we analyzed was SendBest, where the expert always sends the best possible review regardless of the actual hotel quality. This strategy is relevant because it mimics typical behavior from less sophisticated agents aiming to persuade users. Interestingly, our approach was able to outperform human-generated training data for predicting responses against SendBest across all sample sizes.
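Under the hypothetical interface from the earlier round sketch, SendBest is a one-liner:

```python
def send_best(reviews, history):
    """SendBest: always reveal the highest-scoring review, whatever the hotel's true quality."""
    return max(reviews, key=lambda r: r.score)
```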

The SendBestOrMean Strategy

We also assessed the SendBestOrMean strategy, where the expert chooses the best review if the hotel is good or sends a review close to the mean score if it is not. In this case, our model struggled to predict human choices accurately compared to human-generated data, especially for smaller datasets.
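A plausible sketch of SendBestOrMean, under the same assumed interface and "good hotel" cutoff as before:

```python
from statistics import mean

def send_best_or_mean(reviews, history, threshold=8.0):
    """SendBestOrMean: reveal the top review if the hotel is good, otherwise
    the review whose score sits closest to the hotel's mean score."""
    avg = mean(r.score for r in reviews)
    if avg >= threshold:  # assumed quality cutoff, as in the earlier sketch
        return max(reviews, key=lambda r: r.score)
    return min(reviews, key=lambda r: abs(r.score - avg))
```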

The Role of Persona Diversification

We highlighted that using various personas when generating the LLM dataset reduced the sample sizes needed to achieve specific accuracy levels. By examining the contributions of different personas to the model's overall quality, we found that each one contributed roughly equally to the dataset's value.
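One natural way to run such an analysis is leave-one-persona-out retraining. The sketch below, which reuses the hypothetical `evaluate` helper from earlier, is our plausible reading of the procedure, not necessarily the exact method used:

```python
def persona_ablation(data_by_persona, test_X, test_y):
    """Retrain without each persona's data and see how test accuracy moves."""
    def pooled(skip=None):
        X, y = [], []
        for persona, (pX, py) in data_by_persona.items():
            if persona != skip:
                X.extend(pX)
                y.extend(py)
        return X, y

    full_acc = evaluate(*pooled(), test_X, test_y)
    for persona in data_by_persona:
        acc = evaluate(*pooled(skip=persona), test_X, test_y)
        print(f"without {persona}: {acc:.3f} (full dataset: {full_acc:.3f})")
```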

Conclusion

This study provides initial insights into the potential of using LLM-generated data to train human choice prediction models. Our findings suggest that data generated without any human input can, under certain conditions, yield better results than traditional methods. Limitations remain, however: the LLM-based approach consistently outperformed linguistic-only methods, but it did not always beat models trained on human-generated data.

Looking to the future, further research could expand the applications of LLM-generated data beyond persuasion games. Combining human and synthetic data could also improve predictions in strategic human decisions. Understanding the limitations of LLM-generated data in certain contexts will also be key for advancing the field of human choice prediction in economics.
