Simple Science

Cutting-edge science explained simply

Computer Science · Machine Learning · Computers and Society

Insights from the Learning on Graphs Conference Survey

A survey reveals thoughts on the first Learning on Graphs Conference.



Figure: Survey results highlight experiences at the first Learning on Graphs Conference and suggestions for improvement.

Machine learning conferences are becoming larger, and the methods used to review submissions are getting more complicated. To understand how these conferences work and gather opinions from different participants, a survey was conducted at the first "Learning on Graphs" Conference.

This conference, which took place from December 9 to 12, 2022, aimed to create a key gathering place for research on graph learning. It relied on advice from recognized experts in the field and focused on providing high-quality reviews. To improve the review process, three main strategies were put in place:

  1. Monetary rewards for top reviewers.
  2. Fewer papers assigned to each reviewer compared to other conferences.
  3. A commitment to transparency and quality that encourages community engagement.

The authors of the survey wanted to see if these strategies worked. They decided that a large survey would help gather insights on the experiences of authors, reviewers, and area chairs at the conference. While many conferences conduct surveys, few lead to real changes in how they are run.

The aim of this report is to encourage the community to reflect on these experiences, allowing for improved transparency and a re-evaluation of existing processes. As the research communities grow, the way conferences are organized must also adapt. We cannot apply the same methods used in the past to a larger group today.

Related Work

Other conferences, like NeurIPS 2021, have already conducted surveys to assess the reviewing process, highlighting issues and encouraging authors not to feel too discouraged by rejections. However, larger conferences face challenges in making significant changes. This is because their program committees change every year, making it hard to transfer knowledge.

In contrast, the "Learning on Graphs" Conference is smaller and newer, which makes it easier to maintain consistency in decisions. The authors hope their findings inspire other conferences to examine their own processes critically.

Survey Overview

To understand how people experienced the conference, a survey was sent to all authors, reviewers, and area chairs registered through OpenReview; it remained open from late November 2022 to mid-February 2023. The survey primarily used closed questions, with options for participants to express their views.

The response data and analysis methods were made available to ensure transparency. The feedback gathered will help assess how well the conference functions and highlight areas for improvement.
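To illustrate the kind of transparent analysis a closed-question survey allows, here is a minimal Python sketch that tallies answer shares. The question scale and the response values are hypothetical, not taken from the actual survey data:

```python
from collections import Counter

# Hypothetical responses on a 5-point satisfaction scale.
# Illustrative values only; not actual survey data.
responses = [
    "Very satisfied", "Satisfied", "Satisfied", "Neutral",
    "Satisfied", "Dissatisfied", "Very satisfied", "Neutral",
]

counts = Counter(responses)
total = len(responses)

# Report each option's share, as a released analysis script might.
for option, n in counts.most_common():
    print(f"{option}: {n}/{total} ({100 * n / total:.0f}%)")
```

Publishing both the anonymized responses and a script like this lets anyone reproduce the reported percentages.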

Sample Composition

The survey was distributed across all active submissions, and responses came in from a meaningful share of participants: authors, reviewers, and area chairs alike took the time to share their thoughts and experiences.

Questions to Authors

Overall, most authors reported satisfaction with the conference and the reviews they received. However, their feelings about the rebuttal phase, the period when authors could respond to reviews, were mixed but still leaned positive.

Authors also felt that the quality of reviews met or exceeded standards seen in comparable conferences. Given that this was the first edition of the conference, this is a noteworthy outcome that confirms the quality of the review process. Many authors noted that their overall experience at this conference was at least as good, if not better, than at other conferences.

Questions to Reviewers

Reviewers reported that their main motivation for taking part in this conference was interest in the topic. They also appreciated the financial rewards for good reviews and found the workload manageable.

Most reviewers expressed satisfaction with the review process and the rebuttal phase, stating that it was on par with their experiences at other established conferences. Reviewers noted that the workload was lower than what they typically faced at other events.

Questions to Area Chairs

Although only a small number of area chairs participated, those who provided feedback were generally satisfied with their overall review experience. They also felt that their workload was manageable compared to other conferences.

General Feedback

Participants were given the option to discuss the current setup of the conference and share open-ended feedback. Many wished for more time to engage with reviewers and suggested that a better culture of review is needed. Some expressed concerns about the demands placed on them by reviewers, pointing out that some requests seemed irrelevant or out of scope.

Several respondents suggested allowing public comments on submissions to further involve the community in the process. There were also comments highlighting a positive experience compared to other conferences.

Extended Abstract Track

The conference had a unique track for extended abstracts. Participants shared mixed feelings about it. Many appreciated the chance to submit early or preliminary work to receive quick feedback but were concerned about perceived gaps in quality. Reviewers seemed to have high standards for this track, making acceptance challenging.

Conclusions and Future Directions

The initial responses to the "Learning on Graphs" Conference show positive engagement and a willingness to improve the process further. Many participants felt that their experience was among the best they had had at any conference, with some noting particularly high reviewer standards.

However, there were also areas highlighted for improvement, especially involving communication between authors and reviewers. In some cases, there were papers that received no feedback, which suggests an opportunity for better engagement.

Moving forward, the conference organizers plan to maintain the vetting process for reviewers and keep the quality of feedback high. They also want to monitor the review process more closely to identify and address any communication breakdowns early on.

The concept of a reviewer reputation system was also mentioned as a way to improve accountability and promote good practices among reviewers. While this idea has its complexities, the benefits could be significant in elevating the quality of reviews across the community.
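As a thought experiment only, a reviewer reputation score could be as simple as a running average of quality ratings assigned to each review. The class below is a hypothetical sketch of that idea, not a system the conference described:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewerReputation:
    """Hypothetical running reputation score for one reviewer.

    Each review receives a quality rating (e.g. from an area chair,
    on a 1-5 scale); the reputation is the average rating so far.
    """
    ratings: list = field(default_factory=list)

    def add_rating(self, rating: float) -> None:
        # Reject ratings outside the assumed 1-5 scale.
        if not 1.0 <= rating <= 5.0:
            raise ValueError("rating must be between 1 and 5")
        self.ratings.append(rating)

    @property
    def score(self) -> float:
        # Start from a neutral score before any reviews are rated.
        if not self.ratings:
            return 3.0
        return sum(self.ratings) / len(self.ratings)
```

The complexities the text alludes to (who rates the reviews, how to avoid retaliation, how scores carry over between venues) are exactly what such a simple average leaves unsolved.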

This feedback from the first conference will serve as a foundation for the next edition and beyond, with hopes of finding solutions to ongoing challenges while nurturing a supportive community for researchers. As the conference grows, the organizers will seek to refine and enhance their processes in a way that benefits all participants: authors, reviewers, and area chairs alike.
