Simple Science

Cutting-edge science explained simply

# Computer Science / Cryptography and Security

Evaluating User Experiences with Modern CAPTCHAs

This study analyzes how users interact with various CAPTCHA types.

― 7 min read


CAPTCHA User Experience Study: Analyzing user interactions and challenges with modern CAPTCHAs.

CAPTCHAs are tools used on websites to tell humans apart from automated programs (bots). They make users complete simple tasks that are easy for people but hard for bots, such as typing the letters from a distorted image or selecting images that match a specific requirement. Over the years, CAPTCHAs have become more complex to keep up with the increasing ability of bots to solve them. Given this ongoing battle, it is vital to know how long real users take to solve modern CAPTCHAs and how they feel about them.

Purpose of the Study

Modern CAPTCHAs are often challenging for both machines and people. This study examines how quickly users can solve current CAPTCHAs and what people think about them. We want to find out if there are differences based on the type of CAPTCHA and whether context matters. For example, is it different when users solve CAPTCHAs while trying to create an account versus solving them directly as a standalone task?

Background on CAPTCHAs

CAPTCHAs have been around for almost twenty years. They help protect websites from bots that might scrape information, create fake accounts, or misuse website features. The simplest CAPTCHAs asked users to type in distorted letters. However, as technology developed, bots learned how to solve these basic CAPTCHAs with great accuracy. Some bots even have humans working behind the scenes to solve CAPTCHAs for them.

As a response, more sophisticated CAPTCHAs have appeared. Today, you might be asked to identify objects in images or complete puzzles. Because CAPTCHAs are meant to be easily solved by humans, it is essential to measure how long they take to solve and how users perceive them.

Prior Research

There have been studies on how long it takes people to solve different CAPTCHAs. One notable study recruited many participants to measure solving times and find out how users felt about the different CAPTCHA types. Past studies showed that some CAPTCHAs took longer to solve than expected, and more frustrating tasks often led to higher abandonment rates. This means users left the task before finishing it if it took too long.

In a more recent study, researchers looked at new types of CAPTCHAs and compared them to traditional text and image CAPTCHAs. They found that newer types performed better in terms of user satisfaction and speed.

Research Questions

Our study aims to answer several key questions:

  1. How long do users take to solve different CAPTCHAs?
  2. What types of CAPTCHAs do users prefer?
  3. Does the context in which a user solves a CAPTCHA influence the time it takes?
  4. Are there differences in solving time based on demographic factors like age or education?
  5. Does the context affect how many users abandon the task?

Methodology

To answer these questions, we manually inspected 200 popular websites to see how many used CAPTCHAs and what types they used. We then conducted a user study with people solving different CAPTCHAs. Participants were asked to solve a set of CAPTCHAs and provide feedback on their experiences.

We divided the user study into two groups. One group solved CAPTCHAs directly, while the other group solved them as part of account creation. This allowed us to see how context impacted solving times.
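The two-group design described above amounts to a simple between-subjects comparison of solving times. The sketch below uses hypothetical numbers, not the study's raw data, and applies Welch's t-statistic, a standard choice when two groups may have unequal variances:

```python
import math
import statistics

# Hypothetical solving times in seconds (illustrative only, not the study's data).
direct = [4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 4.4, 5.8]      # CAPTCHA solved on its own
contextual = [6.9, 8.2, 7.5, 9.1, 6.4, 8.8, 7.0, 9.6]  # CAPTCHA during account creation

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

mean_direct = statistics.mean(direct)
mean_context = statistics.mean(contextual)
increase_pct = (mean_context - mean_direct) / mean_direct * 100

print(f"direct mean:     {mean_direct:.2f}s")
print(f"contextual mean: {mean_context:.2f}s")
print(f"increase:        {increase_pct:.1f}%")
print(f"Welch t:         {welch_t(contextual, direct):.2f}")
```

A positive t-statistic here only indicates that the contextual group's mean is higher; a full analysis would also compute degrees of freedom and a p-value.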

Website Inspection Results

In our inspection of 200 websites, we found that many still use traditional CAPTCHAs, like reCAPTCHA, distorted text, and slider-based CAPTCHAs. Here’s a quick breakdown of our findings:

  • reCAPTCHA appeared on 34% of the inspected websites, making it the most common.
  • Distorted text CAPTCHAs were also prevalent, with various styles across different websites.
  • Slider-based CAPTCHAs appeared on 7% of the sites, providing interactive challenges for users.

These findings indicate that while CAPTCHAs have evolved, many websites still rely on familiar formats.

User Study Overview

The user study enlisted participants from a crowdsourcing platform. Each participant was required to solve ten different CAPTCHAs. We gathered information on solving times, preferences, and participant demographics. Each CAPTCHA was unique for each participant to ensure a genuine experience.

Participants were compensated for their time, with higher pay offered for more complex tasks. This aimed to understand how financial incentives could influence task completion rates.

Findings on Solving Times

Overall, we found that solving times varied significantly across different CAPTCHA types. Some key observations included:

  • reCAPTCHA (click) had the shortest solving times, taking an average of around 3 to 5 seconds.
  • The distorted text CAPTCHAs had varied times, with simpler versions being solved faster than more complex ones.
  • CAPTCHAs requiring interactions, like sliding or rotating objects, took longer, averaging 18 to 42 seconds.

Interestingly, the complexity of the CAPTCHA didn’t always relate directly to solving time. Some CAPTCHAs that were perceived as simple took longer, depending on user familiarity with the type.

User Preferences

When asked about their preferences, participants rated the CAPTCHAs on a scale from 1 to 5. The preference scores showed that:

  • CAPTCHAs with lower solving times generally received higher enjoyment scores.
  • However, some types that took longer to solve, like puzzle-based CAPTCHAs, still scored high due to their engaging nature.

This suggests that factors beyond just time, like enjoyment and user experience, influence how people feel about CAPTCHAs.
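Aggregating such 1-to-5 ratings is straightforward. The sketch below uses made-up scores (not the study's data) to show how CAPTCHA types can be ranked by mean preference:

```python
from statistics import mean

# Hypothetical 1-5 preference ratings per CAPTCHA type (illustrative only).
ratings = {
    "reCAPTCHA (click)": [5, 4, 5, 4, 5],
    "distorted text":    [3, 2, 3, 4, 2],
    "puzzle/slider":     [4, 5, 4, 4, 5],
}

# Rank types by average preference score, highest first.
ranked = sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {mean(scores):.1f}")
```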

Contextual Influence on Solving Times

We compared the solving times of participants between two distinct settings: direct solving and contextualized tasks. Results showed that:

  • Participants solved CAPTCHAs faster in direct settings.
  • In contrast, those solving CAPTCHAs while completing account forms took longer, with average solving times increasing by up to 57%.

This highlights the importance of context in how people engage with CAPTCHAs on websites.

Demographic Influence on Solving Times

We examined how demographic factors like age and education level played a role in solving times. Our findings indicated:

  • Older participants tended to take longer to solve CAPTCHAs than younger participants.
  • Interestingly, education level did not have a significant effect on solving times across CAPTCHA types, which was unexpected.

These insights can help website designers understand how to tailor the CAPTCHA experience for different user groups.

Abandonment Rates

User abandonment during CAPTCHAs is a critical aspect of the experience. We found that:

  • Abandonment rates varied widely, with as many as 45% of participants leaving before completing their task.
  • Context played a large role, as participants working on account creation were more likely to abandon the task compared to those asked to solve CAPTCHAs directly.

This emphasizes the need for thoughtful design when implementing CAPTCHAs in user journeys to minimize frustration.
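The abandonment rate per condition is simply the fraction of participants who started but did not finish. The counts below are hypothetical, chosen only to mirror the 45% figure above:

```python
# Hypothetical per-condition counts (illustrative only, not the study's raw numbers).
started = {"direct": 200, "account_creation": 200}
completed = {"direct": 178, "account_creation": 110}

# Abandonment = (started - completed) / started, per condition.
abandonment = {
    cond: (started[cond] - completed[cond]) / started[cond]
    for cond in started
}
for cond, rate in abandonment.items():
    print(f"{cond}: {rate:.0%} abandoned")
```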

Conclusion

CAPTCHAs continue to serve an essential role in securing websites against bots. However, understanding user experiences with them is vital. Our findings suggest that solving times can vary widely based on the type of CAPTCHA, context, and user demographics. Additionally, preferences demonstrate that enjoyment plays a significant part in how these tools are received by users.

Future work will focus on more controlled studies to gather finer detail on the user experience and examine why certain CAPTCHAs lead to higher abandonment rates. This understanding could lead to improved designs that are both effective against bots and friendly for users.

Directions for Future Research

To further build on the findings of this study, several avenues are available for future research:

  • Investigate the reasons behind user abandonment and seek to identify design changes to reduce it.
  • Explore the emotional responses of users when engaging with different types of CAPTCHAs.
  • Analyze the effectiveness of varying payment structures on participant engagement and performance in CAPTCHA tasks.

By deepening our understanding of how users interact with CAPTCHAs, we can develop even better tools to balance security and user experience.

Original Source

Title: An Empirical Study & Evaluation of Modern CAPTCHAs

Abstract: For nearly two decades, CAPTCHAs have been widely used as a means of protection against bots. Throughout the years, as their use grew, techniques to defeat or bypass CAPTCHAs have continued to improve. Meanwhile, CAPTCHAs have also evolved in terms of sophistication and diversity, becoming increasingly difficult to solve for both bots (machines) and humans. Given this long-standing and still-ongoing arms race, it is critical to investigate how long it takes legitimate users to solve modern CAPTCHAs, and how they are perceived by those users. In this work, we explore CAPTCHAs in the wild by evaluating users' solving performance and perceptions of unmodified currently-deployed CAPTCHAs. We obtain this data through manual inspection of popular websites and user studies in which 1,400 participants collectively solved 14,000 CAPTCHAs. Results show significant differences between the most popular types of CAPTCHAs: surprisingly, solving time and user perception are not always correlated. We performed a comparative study to investigate the effect of experimental context -- specifically the difference between solving CAPTCHAs directly versus solving them as part of a more natural task, such as account creation. Whilst there were several potential confounding factors, our results show that experimental context could have an impact on this task, and must be taken into account in future CAPTCHA studies. Finally, we investigate CAPTCHA-induced user task abandonment by analyzing participants who start and do not complete the task.

Authors: Andrew Searles, Yoshimichi Nakatsuka, Ercan Ozturk, Andrew Paverd, Gene Tsudik, Ai Enkoji

Last Update: 2023-07-22

Language: English

Source URL: https://arxiv.org/abs/2307.12108

Source PDF: https://arxiv.org/pdf/2307.12108

Licence: https://creativecommons.org/licenses/by/4.0/

Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.

Thank you to arxiv for use of its open access interoperability.
