The Essential Guide to Writing a NeurIPS Paper
Follow these key tips to successfully write and submit your NeurIPS paper.
Sam Griesemer, Defu Cao, Zijun Cui, Carolina Osorio, Yan Liu
Table of Contents
- Claims and Contributions: What Are You Bringing to the Table?
- Theoretical Framework: Numbered Theorems Are Your Friends
- Experiment Reproducibility: Can Your Research Be Duplicated?
- Open Access: Share the Knowledge
- Experimenting with Details: Don’t Hold Back!
- Compute Resources: How Much Muscle Did You Need?
- Code of Ethics: Keep It Clean
- Broader Impacts: How Does Your Work Affect the World?
- Datasets and New Assets: Let’s Talk About the Goods
- Closing Thoughts: Let’s Make Science Fun!
- Original Source
Writing a scientific paper can feel like preparing for a big exam. You want to make sure you check all the boxes, so you don’t get a failing grade, especially when your peers are the ones grading you. The NeurIPS conference has a set of guidelines that authors should follow to make sure their work is not only good but also clear and responsible. These guidelines are like a checklist for students on the first day of school: read, write, and don’t forget your lunch!
Claims and Contributions: What Are You Bringing to the Table?
When you write your paper, start strong. Clearly state what you are claiming as the main point of your work. Think of it as your elevator pitch—short, sweet, and to the point. You need to lay out what you’re contributing, any assumptions you’ve made, and any limits to your research. If you say you’re creating a new super-duper model for predicting the weather, then back it up with some actual data!
Don’t be afraid to aim for the stars. It’s okay to mention ambitious, dreamy goals as long as you’re honest about what you’ve actually achieved. If your paper doesn’t have any limitations, say “NA,” but if it does, be brave enough to discuss them. After all, pretending there are no bumps on the road can make your paper look suspicious, like a magician hiding a rabbit up their sleeve.
Theoretical Framework: Numbered Theorems Are Your Friends
Every argument or claim you make should be backed up with some solid theory. Number your theorems and formulas so nobody gets lost, like a friendly tour guide at a museum. This way, if someone wants to reference a particular theorem, they can find it without a treasure map. Make sure to clearly state any assumptions you have in your theorems to avoid confusion.
If you provide any informal proof in the main part of your paper, accompany it with a more formal proof in the appendix. It’s like giving your readers both a short story and a novel; some will appreciate the quick read, while others want the full epic.
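To make this concrete, here is a minimal LaTeX sketch of a numbered theorem that states its assumption explicitly, using the standard amsthm package; the Lipschitz condition and convergence rate are placeholder content for illustration, not results from any particular paper.

```latex
% Minimal sketch: numbered, cross-referenceable statements via amsthm.
\documentclass{article}
\usepackage{amsmath,amsthm}
\newtheorem{assumption}{Assumption}
\newtheorem{theorem}{Theorem}

\begin{document}

\begin{assumption}\label{asm:lipschitz}
The loss $f$ is $L$-Lipschitz on its domain.
\end{assumption}

\begin{theorem}\label{thm:rate}
Under Assumption~\ref{asm:lipschitz}, gradient descent with step size
$\eta \le 1/L$ converges at rate $O(1/T)$.
\end{theorem}

% Short, informal argument here; the full proof lives in the appendix.
\begin{proof}[Proof sketch]
See Appendix~A for the formal version.
\end{proof}

\end{document}
```

Because every statement carries a number and a label, a reviewer can say “Theorem 1” or “Assumption 1” and everyone knows exactly what is meant.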
Experiment Reproducibility: Can Your Research Be Duplicated?
What good is an experiment if nobody else can repeat it? In the world of research, reproducibility is key. If you conduct experiments, make sure to provide all the details needed for others to replicate your work. Even if you don’t include data or code, give clear instructions on how to achieve similar results.
As tempting as it might be to keep your secrets, transparency is the name of the game. Think of it like sharing your cookie recipe: you want your friends to enjoy those cookies too!
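A concrete first step is pinning down randomness. Here is a minimal Python sketch assuming a NumPy/PyTorch stack (the seed value itself is arbitrary); report the seed you used so others can replicate the exact run.

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Fix every common source of randomness so a run can be repeated."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs (no-op without CUDA)
    # Trade a little speed for determinism in cuDNN convolution kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)  # state this value in the paper or the released config
```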
Open Access: Share the Knowledge
While you may hold back code and data that aren’t central to your contribution, providing open access is a good idea. The more, the merrier, right? If you can, give clear instructions on how to access your data, as well as how to prepare it. If you’ve created a beautiful dataset, share it like it’s your prized family recipe.
And remember, if you’re including any data from the web, make sure to point out the source. We all love a good citation, especially when it comes with a license so that everyone knows how to use it. Sharing is caring!
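One lightweight way to do this is to ship a small access script with the paper that records the source URL, the license, and a checksum of the exact version you used. In this sketch, the URL, checksum, and license values are hypothetical placeholders to be filled in with your own:

```python
import hashlib
import urllib.request

# Hypothetical placeholders: substitute the real URL, the published
# checksum, and the license of your released dataset.
DATA_URL = "https://example.org/my-dataset-v1.csv"
EXPECTED_SHA256 = "put-the-published-checksum-here"
LICENSE = "CC BY 4.0"  # stating the license tells readers how they may reuse it

def fetch_dataset(path: str = "data.csv") -> str:
    """Download the dataset and verify it matches the version in the paper."""
    urllib.request.urlretrieve(DATA_URL, path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError("Checksum mismatch: not the version the paper used")
    return path
```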
Experimenting with Details: Don’t Hold Back!
When you’re sharing your experimental results, include enough detail so that even your grandma would understand. Explain your experimental settings and any statistical significance you found. It’s like telling a good story; you have to set the scene and reveal the outcome.
When sharing error bars or confidence intervals, make it clear how you calculated these. And if you throw in any fancy statistical terms, make sure to define them. Your readers will thank you.
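For instance, one common recipe, and one that is easy to state in a caption, is the mean over several random seeds with a normal-approximation 95% interval. The scores below are made-up numbers for illustration:

```python
import numpy as np

# Hypothetical accuracies from five runs with different random seeds.
scores = np.array([0.812, 0.798, 0.825, 0.804, 0.819])

mean = scores.mean()
# Standard error of the mean: sample standard deviation (ddof=1) / sqrt(n).
sem = scores.std(ddof=1) / np.sqrt(len(scores))
# 1.96 * SEM is the normal-approximation 95% interval; with only five runs,
# a t-distribution critical value would give a slightly wider (safer) bar.
print(f"accuracy: {mean:.3f} ± {1.96 * sem:.3f} (95% CI over {len(scores)} seeds)")
```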
Compute Resources: How Much Muscle Did You Need?
Let’s face it: every great study requires some serious computing power. Don’t hide the details! Let your audience know what kind of hardware you used, whether CPUs or GPUs, how much memory you had, and how much compute each experimental run required. Transparency is important, especially when someone else might want to redo your work.
Also, if you ran preliminary or failed experiments that didn’t make it into the paper, disclose the compute they consumed too. No one likes being ghosted, especially not when they’re trying to understand the full picture.
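Collecting these details takes one small helper rather than heroic effort. This sketch assumes a PyTorch environment and simply records what each run had available:

```python
import platform

import torch

def log_compute() -> dict:
    """Record the hardware behind a run so readers can budget a reproduction."""
    has_gpu = torch.cuda.is_available()
    return {
        "cpu": platform.processor() or platform.machine(),
        "gpu": torch.cuda.get_device_name(0) if has_gpu else "none",
        "gpu_memory_gb": (
            round(torch.cuda.get_device_properties(0).total_memory / 1e9, 1)
            if has_gpu else 0.0
        ),
    }

print(log_compute())  # stash this alongside each experiment's results
```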
Code of Ethics: Keep It Clean
In a world where ethics should never take a back seat, make sure to adhere to the NeurIPS Code of Ethics. If you have any reasons for deviating from the norm, explain them. Think of this as waving a flag when you’re in a bit of murky water.
If your study involved human subjects or crowdsourced workers, make sure they’re treated well. Paying them fairly is a must—and if this means you have to stretch your budget a bit, so be it!
Broader Impacts: How Does Your Work Affect the World?
Ask yourself: what effect does my research have on society? If your work could potentially cause harm, like generating fake news or compromising privacy, be honest about it. It’s a bit like realizing your invention could end up being used as a weapon; better to acknowledge the risks.
If you identify those risks, consider suggesting ways to mitigate them. You could even become the hero by implementing safeguards for your models or datasets. It’s better to be safe than sorry!
Datasets and New Assets: Let’s Talk About the Goods
If you’re using existing datasets, always give credit to the original sources. Include the version used and any licenses that apply, just like you would when borrowing a book from the library. If you’re creating a new dataset, tell people how it was obtained and whether consent was given. No one likes surprises!
When it comes to new assets like models or code, share the details through structured templates. Yes, it might sound tedious, but clarity is key.
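As an illustration, such a record might look like the sketch below. The field names follow common datasheet and model-card practice rather than any official NeurIPS schema, and every value is a hypothetical example:

```python
# Hypothetical asset card for a released model checkpoint.
asset_card = {
    "name": "example-model-v1",
    "type": "model checkpoint",
    "version": "1.0",
    "license": "CC BY 4.0",
    "source_data": "collected with participant consent; see the paper for details",
    "intended_use": "research on the task studied in the paper",
    "known_limitations": "not validated outside the reported benchmarks",
}
```

Tedious, yes, but a reader holding this card never has to guess.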
Closing Thoughts: Let’s Make Science Fun!
In a nutshell, if you want to submit a paper to NeurIPS, keep it clear, honest, and accessible. Make your claims bold but backed by solid evidence. Share your work so that others can enjoy the fruits of your labor. And don’t forget, ethics matter; you want to be remembered as the researcher who played fair!
Now go forth and write, and remember: science isn’t just about numbers, formulas, and dry text. It’s about curiosity, discovery, and—dare we say—fun!
Original Source
Title: Active Sequential Posterior Estimation for Sample-Efficient Simulation-Based Inference
Abstract: Computer simulations have long presented the exciting possibility of scientific insight into complex real-world processes. Despite the power of modern computing, however, it remains challenging to systematically perform inference under simulation models. This has led to the rise of simulation-based inference (SBI), a class of machine learning-enabled techniques for approaching inverse problems with stochastic simulators. Many such methods, however, require large numbers of simulation samples and face difficulty scaling to high-dimensional settings, often making inference prohibitive under resource-intensive simulators. To mitigate these drawbacks, we introduce active sequential neural posterior estimation (ASNPE). ASNPE brings an active learning scheme into the inference loop to estimate the utility of simulation parameter candidates to the underlying probabilistic model. The proposed acquisition scheme is easily integrated into existing posterior estimation pipelines, allowing for improved sample efficiency with low computational overhead. We further demonstrate the effectiveness of the proposed method in the travel demand calibration setting, a high-dimensional inverse problem commonly requiring computationally expensive traffic simulators. Our method outperforms well-tuned benchmarks and state-of-the-art posterior estimation methods on a large-scale real-world traffic network, as well as demonstrates a performance advantage over non-active counterparts on a suite of SBI benchmark environments.
Authors: Sam Griesemer, Defu Cao, Zijun Cui, Carolina Osorio, Yan Liu
Last Update: 2024-12-07
Language: English
Source URL: https://arxiv.org/abs/2412.05590
Source PDF: https://arxiv.org/pdf/2412.05590
Licence: https://creativecommons.org/licenses/by/4.0/
Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.
Thank you to arXiv for use of its open access interoperability.