🧠 All Things AI
Advanced

Key Research Venues

AI and ML research is primarily disseminated at conferences, not in journals. This is unusual in academia: most scientific fields treat journal publication as the gold standard. In ML, conferences with peer review and proceedings serve that role, and arXiv preprints circulate months before any formal review. Understanding where papers come from, what each venue covers, and how prestige is distributed helps you filter signal from noise as you navigate the literature.

The Conference-First Publication Model

ML conferences publish accepted papers as short proceedings papers (typically 8–12 pages plus references and appendices), reviewed double-blind. Acceptance rates at top venues range from 15% to 30%. The conference is both the review system and the publication venue: accepted papers appear in the conference proceedings and are typically posted on arXiv at the same time.

How the timeline works for a typical paper:

  1. Authors post a preprint to arXiv before or at the same time as the conference submission
  2. Community reads the preprint; discussion begins on Twitter/X and in Discord servers
  3. Peer review happens double-blind over 2–3 months (at ICLR, reviews are public)
  4. Authors receive reviews and submit a rebuttal; decision is issued
  5. Accepted paper appears in conference proceedings at the event (6–12 months after initial arXiv post)
  6. Camera-ready version updates the arXiv preprint with review-based revisions

The practical consequence: important papers are known and discussed 6–12 months before they are officially "published." If you are only reading conference proceedings, you are reading last year's research.

Major Venues at a Glance

Conference | Focus | Timing | Acceptance rate
--- | --- | --- | ---
NeurIPS | Broadest ML venue; covers all of AI/ML including theory, systems, applications | December annually | ~25% (of ~15,000 submissions)
ICML | Strong on theory, optimization, statistical learning; complements NeurIPS for foundations | July annually | ~25–30%
ICLR | Deep learning and representation learning; open review (public reviews on OpenReview) | April–May annually | ~30%
ACL / EMNLP / NAACL | NLP-focused; historically important for LLM research before it moved to general ML venues | ACL: August; EMNLP: November; NAACL: June | ~25%
CVPR / ICCV / ECCV | Computer vision; important for multimodal models, image generation, video AI | CVPR: June; ICCV: odd years (October); ECCV: even years (October) | ~25%

NeurIPS – The Biggest Tent

NeurIPS (Neural Information Processing Systems) is the highest-profile general ML conference. Its breadth is both a strength and a limitation: the same venue publishes statistical learning theory proofs and product-style papers about commercial model evaluations. In recent years, submission volume has grown dramatically (from ~2,000 submissions a decade ago to ~15,000 in 2024), and some researchers argue quality control has suffered at scale.

What NeurIPS acceptance signals:

  • The paper was judged by 3–4 reviewers to make a non-trivial technical contribution
  • It survived a rebuttal process
  • It does not mean the result is reproducible, that baselines were fair, or that the finding is practically significant
  • Oral and spotlight papers (a small subset) received stronger reviewer scores; these are a better signal of impact

ICLR – Open Review

ICLR is notable for running its review process on OpenReview.net, where all reviews, author rebuttals, and reviewer discussions are public. This creates a secondary signal beyond acceptance/rejection: you can read the reviews of any submitted paper, including those that were rejected, to understand why the reviewers found the work insufficient.

Reading ICLR reviews is one of the most efficient ways to develop critical evaluation skills. Strong reviewer comments identify the same weaknesses you should be looking for when reading any paper: insufficient baselines, unfair comparisons, overclaimed contributions, missing ablations. Rejected papers with detailed reviews are as educational as accepted ones.
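Because these reviews are public, they can also be pulled programmatically. A minimal sketch, assuming OpenReview's public API v1 at `api.openreview.net`, where reviews, rebuttals, and decisions are "notes" attached to a submission's forum; the forum ID used here is a hypothetical placeholder (in practice it is the ID that appears in a paper's openreview.net URL):

```python
from urllib.parse import urlencode

# OpenReview's public notes endpoint (API v1). Every review, rebuttal,
# and decision on a submission is a "note" attached to its forum.
OPENREVIEW_API = "https://api.openreview.net/notes"

def forum_notes_url(forum_id: str) -> str:
    """Build a query URL for all public notes on one submission's forum."""
    return f"{OPENREVIEW_API}?{urlencode({'forum': forum_id})}"

# Hypothetical forum ID for illustration only.
print(forum_notes_url("AbC123xYz"))
```

Fetching that URL returns JSON whose `notes` array mixes the submission itself with its reviews and discussion; filtering by each note's invitation distinguishes them.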

arXiv – The Real-Time Layer

For ML practitioners, arXiv is the actual publication venue for current research. Papers are posted with no peer review, typically within days of being written. The ML-relevant sections are:

cs.LG – Machine Learning

General ML theory, methods, applications. The broadest section; highest volume.

cs.CL – Computation and Language

NLP and language models. LLM research almost always has cs.CL as its primary or cross-listed section.

cs.CV – Computer Vision

Image and video understanding, generation, multimodal architectures.

cs.AI – Artificial Intelligence

Broader AI including planning, reasoning, knowledge representation. Lower volume than cs.LG.
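These sections can be queried programmatically through arXiv's public Atom API, which is the simplest way to monitor new papers in a category. A minimal sketch using only the standard library; the endpoint and parameters (`search_query`, `sortBy`, `sortOrder`) are part of arXiv's documented API:

```python
from urllib.parse import urlencode

# arXiv's public query endpoint; it returns an Atom XML feed.
ARXIV_API = "http://export.arxiv.org/api/query"

def newest_papers_url(category: str, max_results: int = 5) -> str:
    """Build a query URL for the most recently submitted papers in a category."""
    params = {
        "search_query": f"cat:{category}",  # e.g. cat:cs.LG, cat:cs.CL
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "start": 0,
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"

print(newest_papers_url("cs.CL"))
```

Fetching the resulting URL (e.g. with `urllib.request.urlopen`) yields a feed whose entries carry each paper's title, abstract, authors, and category list.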

arXiv is not peer review

"On arXiv" means the paper exists and the authors felt it was ready to share β€” nothing more. The community recognizes strong preprints quickly through citation, discussion, and social media, which creates an informal quality signal. But many arXiv papers contain errors, overclaim results, or fail to replicate. An important paper posted on arXiv yesterday is still more current than a peer-reviewed paper from last year's conference.

Workshop Culture

Every major conference (NeurIPS, ICML, ICLR, CVPR) runs dozens of co-located workshops. Workshops are where new ideas are first presented, often months before they become full papers. Workshop papers are shorter (4–6 pages), have lower acceptance bars, and go through lighter review. They are frequently where a new research direction first crystallizes as a community.

What workshops signal

  • A topic has enough community interest to organize around it
  • Researchers are actively exploring this direction even if it isn't mature
  • Industry and academia are both interested (many workshops have dual organizers)
  • A workshop that runs for 3+ consecutive years often becomes a main-track topic at the same conference

What workshops don't signal

  • Workshop papers are not peer-reviewed to the same standard as conference papers
  • Many workshop papers are "position papers": claims without full experimental support
  • Cite workshop papers cautiously; they represent work in progress, not validated findings

Industry Research Blogs

The major AI labs publish technical blog posts alongside (and sometimes instead of) academic papers. These are often the first public description of frontier model capabilities and techniques.

Lab | Blog/Research page | Notable for
--- | --- | ---
Anthropic | anthropic.com/research | Interpretability (circuits, SAEs), constitutional AI, model cards, scaling laws
OpenAI | openai.com/research | GPT series, RLHF, InstructGPT, alignment research, system cards
Google DeepMind | deepmind.google/research | Gemini, AlphaFold, AlphaCode, RL research, robotics
Meta AI (FAIR) | ai.meta.com/research | LLaMA series, open-weight models, multimodal research

Checklist: Do You Understand This?

  • Why does ML research publish primarily through conferences rather than journals? What does this mean for the pace of dissemination?
  • What is the typical timeline from arXiv preprint to conference publication, and why does this matter for staying current?
  • What makes ICLR's open review system useful for learning how to evaluate papers critically?
  • What does it mean when a paper is described as "on arXiv", and what peer review, if any, has it undergone?
  • What do workshops at major conferences signal about a research topic, and why should you cite workshop papers more cautiously?
  • If you wanted to find the most current work on a topic as of today, where would you look first and why?