OpenAI’s policies hinder reproducible research on language models
LLMs have become privately-controlled research infrastructure
GPT-4 and professional benchmarks: the wrong answer to the wrong question
OpenAI may have tested on the training data. Besides, human benchmarks are meaningless for bots.
What is algorithmic amplification and why should we care?
A symposium and a primer on social media recommendation algorithms
Artists can now opt out of generative AI. It’s not enough.
Opting out is the latest example of generative AI developers externalizing costs.
The LLaMA is out of the bag. Should we expect a tidal wave of disinformation?
The bottleneck isn't the cost of producing disinfo, which is already very low.
AI cannot predict the future. But companies keep trying (and failing).
A new paper on how AI companies make false promises and how we can challenge them
People keep anthropomorphizing AI. Here’s why
Companies and journalists both contribute to the confusion
Four more things we worked on in 2022
We had a busy 2022. Here are a few things we worked on but didn’t cover here.
AI Snake Oil
What AI *can't* do
© 2023 Sayash Kapoor and Arvind Narayanan