[1] Unpacking the Expressed Consequences of AI Research in Broader Impact Statements
[2] On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
[3] Like a Researcher Stating Broader Impact For the Very First Time
[5] The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research
[6] Against Scale: Provocations and Resistances to Scale Thinking
[7] Utility Is in the Eye of the User: A Critique of NLP Leaderboard Design
[8] The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity
[9] Algorithmic Colonization of Africa
[10] Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence
[11] Don’t ask if artificial intelligence is good or fair, ask how it shifts power
[12] Large image datasets: A pyrrhic win for computer vision?
[13] Language (Technology) is Power: A Critical Survey of “Bias” in NLP
[14] Performative Prediction
[15] Race after technology: Abolitionist tools for the new Jim Code
[16] An overview of the qualitative descriptive design within nursing research
[17] Value-laden disciplinary shifts in machine learning
[18] PyTorch: An Imperative Style, High-Performance Deep Learning Library
[19] ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
[20] XLNet: Generalized Autoregressive Pretraining for Language Understanding
[21] A Unified Framework of Five Principles for AI in Society
[22] Unlabeled Data Improves Adversarial Robustness
[23] Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks
[24] Defending Against Neural Fake News
[25] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
[26] Unified Language Model Pre-training for Natural Language Understanding and Generation
[27] MASS: Masked Sequence to Sequence Pre-training for Language Generation
[28] MixMatch: A Holistic Approach to Semi-Supervised Learning
[29] Adversarial Examples Are Not Bugs, They Are Features
[30] SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
[31] Adversarial Training and Robustness for Multiple Perturbations
[32] Adversarial Training for Free!
[33] On Exact Computation with an Infinitely Wide Neural Net
[34] Algorithms of oppression: how search engines reinforce racism
[35] NAS-Bench-101: Towards Reproducible Neural Architecture Search
[36] Simplifying Graph Convolutional Networks
[37] Wide neural networks of any depth evolve as linear models under gradient descent
[38] Do ImageNet Classifiers Generalize to ImageNet?
[39] Certified Adversarial Robustness via Randomized Smoothing
[40] BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling
[41] Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
[42] Adversarial Examples Are a Natural Consequence of Test Error in Noise
[43] Fairness in representation: quantifying stereotyping as a representational harm
[44] Error Feedback Fixes SignSGD and other Gradient Compression Schemes
[45] Using Pre-Training Can Improve Model Robustness and Uncertainty
[46] A Framework for Understanding Unintended Consequences of Machine Learning
[47] Theoretically Principled Trade-off between Robustness and Accuracy
[48] Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
[49] Cross-lingual Language Model Pretraining
[50] Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?
[51] Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
[52] Gradient Descent Finds Global Minima of Deep Neural Networks
[53] A Convergence Theory for Deep Learning via Over-Parameterization
[54] Video-to-Video Synthesis
[55] Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data
[56] Design Justice, A.I., and Escape from the Matrix of Domination
[57] Troubling Trends in Machine Learning Scholarship
[58] Glow: Generative Flow with Invertible 1x1 Convolutions
[59] Hierarchical Graph Representation Learning with Differentiable Pooling
[60] Neural Tangent Kernel: Convergence and Generalization in Neural Networks
[61] Neural Ordinary Differential Equations
[62] How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift)
[63] Self-Attention Generative Adversarial Networks
[64] Data-Efficient Hierarchical Reinforcement Learning
[65] Construction of the Literature Graph in Semantic Scholar
[66] Black-box Adversarial Attacks with Limited Queries and Information
[67] Gender Recognition or Gender Reductionism?: The Social Implications of Embedded Gender Recognition Systems
[68] Adversarially Robust Generalization Requires More Data
[69] Adversarial Logit Pairing
[70] Addressing Function Approximation Error in Actor-Critic Methods
[71] Disentangling by Factorising
[72] Stronger generalization bounds for deep nets via a compression approach
[73] Isolating Sources of Disentanglement in Variational Autoencoders
[74] Efficient Neural Architecture Search via Parameter Sharing
[75] IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
[76] Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
[77] PointCNN: Convolution On X-Transformed Points
[78] Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
[79] Which Training Methods for GANs do actually Converge?
[80] Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
[81] Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
[82] Do Artifacts Have Politics?
[83] Inter-Coder Agreement in One-to-Many Classification: Fuzzy Kappa
[84] Use of positive and negative words in scientific PubMed abstracts between 1974 and 2014: retrospective analysis
[85] How novelty in knowledge earns recognition: The role of consistent identities
[86] Issues of validity and reliability in qualitative research
[87] When good isn't good enough
[88] Why Science Is Not Necessarily Self-Correcting
[89] The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research
[90] Machine Learning that Matters
[91] Nonparametric Latent Feature Models for Link Prediction
[92] Rethinking LDA: Why Priors Matter
[93] Learning Non-Linear Combinations of Kernels
[94] Measuring Invariances in Deep Networks
[95] Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models
[96] 3D Object Recognition with Deep Belief Nets
[97] Kernel Methods for Deep Learning
[98] Replicated Softmax: an Undirected Topic Model
[100] Guaranteed Rank Minimization via Singular Value Projection
[101] Online dictionary learning for sparse coding
[102] An accelerated gradient method for trace norm minimization
[103] Learning structural SVMs with latent variables
[104] Group lasso with overlap and graph lasso
[105] Large-scale deep unsupervised learning using graphics processors
[106] Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
[107] Multi-view clustering via canonical correlation analysis
[109] Learning with structured sparsity
[110] Feature hashing for large scale multitask learning
[111] Multi-Label Prediction via Compressed Sensing
[112] Deflation Methods for Sparse PCA
[113] On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization
[114] Translated Learning: Transfer Learning across Different Feature Spaces
[115] Online Metric Learning and Fast Similarity Search
[116] Privacy-preserving logistic regression
[117] Nonrigid Structure from Motion in Trajectory Space
[118] Local Gaussian Process Regression for Real Time Online Model Learning
[119] The Recurrent Temporal Restricted Boltzmann Machine
[120] Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity
[121] Domain Adaptation with Multiple Sources
[122] Clustered Multi-Task Learning: A Convex Formulation
[123] Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning
[124] Grassmann discriminant analysis: a unifying view on subspace-based learning
[125] Listwise approach to learning to rank: theory and algorithm
[126] On the quantitative analysis of deep belief networks
[127] Classification using discriminative restricted Boltzmann machines
[128] Efficient projections onto the l1-ball for learning in high dimensions
[129] Learning diverse rankings with multi-armed bandits
[130] A dual coordinate descent method for large-scale linear SVM
[131] Confidence-weighted linear classification
[132] Bayesian probabilistic matrix factorization using Markov chain Monte Carlo
[133] Extracting and composing robust features with denoising autoencoders
[134] Training restricted Boltzmann machines using approximations to the likelihood gradient
[135] A unified architecture for natural language processing: deep neural networks with multitask learning
[136] Three Approaches to Qualitative Content Analysis
[137] Understanding interobserver agreement: the kappa statistic
[138] Sorting Things Out: Classification and Its Consequences
[139] Enhancing the quality and credibility of qualitative analysis
[141] Qualitative evaluation and research methods
[142] Rigor in qualitative research: the assessment of trustworthiness
[144] Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought
[145] The Sociology of Science: Theoretical and Empirical Investigations
[146] On the impact of the computer on society
[147] The Problem of m Rankings
[149] Design Justice, AI, and Escape From the Matrix of Domination
[150] A Retrospective on the NeurIPS
[151] Indigenous Protocol and Artificial Intelligence Position Paper
[152] Wrongfully Accused by an Algorithm
[153] Peer review in NLP: reject-if-not-SOTA
[154] The Values of Machine Learning
[155] Digital defense playbook: Community power tools for reclaiming data
[156] Rise of the robots: Are you ready? Financial Times Magazine (March 2018)
[158] How to plan and perform a qualitative study using content analysis
[159] The Moral Character of Cryptographic Work
[161] Sorting Things Out: Classification and Its Consequences
[162] Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
[163] Evaluation Methods for Topic Models
[164] Issues of validity and reliability in qualitative research
[165] Computer Power and Human Reason: From Judgment to Calculation
[166] Qualitative research practice
[167] The discovery of grounded theory: strategies for qualitative research
[168] Cognitive and Non-Cognitive Values in Science: Rethinking the Dichotomy
[169] Qualitative Research Methods for the Social Sciences
[170] Content Analysis: An Introduction to Its Methodology
(Note: due to minor errors in the data sources used, the distribution of papers over venues and years is not perfectly balanced.)
[172] Sociological Methods: A Sourcebook
[174] Objectivity, Value Judgment, and Theory Choice
Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
(b) Did you discuss any potential negative societal impacts of your work? [Yes] Included in the Appendix
(c) Have you read the ethics review guidelines and ensured that your paper conforms to them?

2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results?

3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Included in Appendix
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] Full listing of annotated papers is given in
(b) Did you include any new assets either in the supplemental material or as a URL? [Yes] Included in supplementary zipfile
(c) Did you discuss whether and how consent was obtained from people whose data you're using/curating?

5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots?
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?