Erik Craddock (@eriklink)

Sign of the future: GPT-5.5

GPT-5.5 shows us that the models keep getting smarter, the apps keep getting more capable, and the harnesses keep getting better, making them ever more effective at solving real problems. I can get a near PhD-quality paper from four prompts, or a playable roleplaying game, illustrated and “playtested,” from one. Yet the fiction is still flat, and the hypotheses are sometimes uninteresting even when the statistics are sound. Still: a year ago, none of this was close, and, with the latest releases, capability gains appear to be accelerating.

oneusefulthing.org

One impressive step on the curve

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 1 Like
Erik Craddock (@eriklink)

A Guide to Which AI to Use in the Agentic Era

I have written eight of these guides since ChatGPT came out, but this version represents a very large break with the past, because what it means to "use AI" has changed dramatically. Until a few months ago, for the vast majority of people, "using AI" meant talking to a chatbot in a back-and-forth conversation. But over the past few months, it has become practical to use AI as an agent: you can assign agents tasks and they complete them, using tools as appropriate. Because of this change, you have to consider three things when deciding what AI to use: Models, Apps, and Harnesses.

oneusefulthing.org

It's not just chatbots anymore

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes
Erik Craddock (@eriklink)

Management as AI superpower

As a business school professor, I think many people have the skills they need, or can learn them, to work with AI agents: they are Management 101 skills. If you can explain what you need, give effective feedback, and design ways of evaluating work, you are going to be able to work with agents. In many ways, at least in your area of expertise, it is much easier than trying to design clever prompts to help you get work done, as it is more like working with people. At the same time, management has always assumed scarcity: you delegate because you can't do everything yourself, and because talent is limited and expensive. AI changes the equation. Now the "talent" is abundant and cheap. What's scarce is knowing what to ask for.

oneusefulthing.org

Thriving in a world of agents

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes
Erik Craddock (@eriklink)

Claude Code and What Comes Next

Skills solve this problem. They are instructions that the AI decides when to use, and they contain not just prompts, but also the sets of tools the AI needs to accomplish a task. Does it need to know how to build a great website? It loads up the Website Creator Skill, which explains how to build a website and the tools to use when doing it. Does it need to build an Excel spreadsheet? It loads the Excel skill with its own instructions and tools. To make another movie reference, it is like when Neo in The Matrix gets martial arts instructions uploaded to his head and acquires a new skill: "I know kung fu." Skills can let an AI cover an entire process by swapping out knowledge as needed. For example, Jesse Vincent released an interesting free list of skills that let Claude Code handle a full software development process, picking up skills as needed, starting with brainstorming and planning before progressing all the way to testing code. Skill creation is technically very easy: it is done in plain language, and the AI can actually help you create skills (more on this in a bit).
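Concretely, a skill is usually just a short plain-language file with a bit of metadata that tells the model when to load it. A hypothetical sketch (the file name, frontmatter fields, and steps below are illustrative, not the exact format any one tool requires):

```markdown
---
name: website-creator
description: Use when the user asks to build or scaffold a website.
---

# Website Creator

1. Ask for the site's purpose and target audience before writing code.
2. Scaffold the project with the team's standard tooling.
3. Apply the style guide kept in this skill's assets folder.
4. Start the local dev server and report the preview URL back to the user.
```

The point of the format is that the `description` line is what lets the AI decide on its own when the skill applies, exactly as the excerpt describes.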

oneusefulthing.org

With the right tools, AI can accomplish impressive things

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes
Erik Craddock (@eriklink)

Giving your AI a Job Interview

You can’t rely on vibes to understand these patterns, and you can’t rely on general benchmarks to reveal them. You need to systematically test your AI on the actual work it will do and the actual judgments it will make. Create realistic scenarios that reflect your use cases. Run them multiple times to see the patterns and take the time for experts to assess the results. Compare models head-to-head on tasks that matter to you. It’s the difference between knowing “this model scored 85% on MMLU” and knowing “this model is more accurate at our financial analysis tasks but more conservative in its risk assessments.” And you are going to need to be able to do this multiple times a year, as new models come out and need evaluation.
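The testing loop described above can be sketched as a tiny harness: realistic scenarios, multiple runs per scenario, head-to-head model comparison. Everything here is illustrative: `call_model` is a stub standing in for a real API client, and the keyword check is a deliberately crude stand-in for the expert review the excerpt calls for.

```python
from collections import defaultdict

# Realistic scenarios drawn from your actual work, each with a minimal check.
SCENARIOS = [
    {"prompt": "Summarize Q3 revenue drivers", "must_include": ["revenue"]},
    {"prompt": "Flag risks in this contract", "must_include": ["risk"]},
]

def call_model(model: str, prompt: str) -> str:
    # Stub: a real harness would call the provider's API here.
    return f"[{model}] draft answer about {prompt.lower()}"

def score(answer: str, must_include: list[str]) -> bool:
    # Crude automatic check; real evals add expert assessment of samples.
    return all(term in answer.lower() for term in must_include)

def run_eval(models: list[str], runs: int = 3) -> dict[str, float]:
    passes = defaultdict(int)
    total = len(SCENARIOS) * runs
    for model in models:
        for scenario in SCENARIOS:
            for _ in range(runs):  # repeat runs to see patterns, not one-off luck
                answer = call_model(model, scenario["prompt"])
                if score(answer, scenario["must_include"]):
                    passes[model] += 1
    return {m: passes[m] / total for m in models}

# Head-to-head comparison on tasks that matter to you.
results = run_eval(["model-a", "model-b"])
print(results)
```

Rerunning this harness whenever a new model ships is what turns "this model scored 85% on MMLU" into "this model is more accurate on our tasks."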

oneusefulthing.org

As AI advice becomes more important, we are going to need to get better at assessing it

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes
Erik Craddock (@eriklink)

The Bitter Lesson versus The Garbage Can - by Ethan Mollick

The lesson is bitter because it means that our human understanding of problems built from a lifetime of experience is not that important in solving a problem with AI. Decades of researchers' careful work encoding human expertise was ultimately less effective than just throwing more computation at the problem. We are soon going to see whether the Bitter Lesson applies widely to the world of work.

The Bitter Lesson suggests we might soon ignore how companies produce outputs and focus only on the outputs themselves. Define what a good sales report or customer interaction looks like, then train AI to produce it. The AI will find its own paths through the organizational chaos; paths that might be more efficient, if more opaque, than the semi-official routes humans evolved. In a world where the Bitter Lesson holds, the despair of the CEO with his head on the table is misplaced. Instead of untangling every broken process, he just needs to define success and let AI navigate the mess. In fact, the Bitter Lesson might actually be sweet: all those undocumented workflows and informal networks that pervade organizations might not matter. What matters is knowing good output when you see it.

oneusefulthing.org

Does process matter? We are about to find out.

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes
Erik Craddock (@eriklink)

Against "Brain Damage" - by Ethan Mollick

AI doesn't damage our brains, but unthinking use can damage our thinking. What's at stake isn't our neurons but our habits of mind. There is plenty of work worth automating or replacing with AI (we rarely mourn the math we do with calculators), but also a lot of work where our thinking is important. For these problems, the research gives us a clear answer. If you want to keep the human part of your work: think first, write first, meet first.

oneusefulthing.org

AI can help, or hurt, our thinking

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes
Erik Craddock (@eriklink)

Using AI Right Now: A Quick Guide - by Ethan Mollick

For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, or OpenAI’s ChatGPT. With all of the options, you get access to both advanced and fast models, a voice mode, the ability to see images and documents, the ability to execute code, good mobile apps, the ability to create images and video (Claude lags here, however), and the ability to do Deep Research. Some of these features are free, but you are generally going to need to pay $20/month to get access to the full set of features you need. I will try to give you some reasons to pick one model or another as we go along, but you can’t go wrong with any of them.

oneusefulthing.org

Which AIs to use, and how to use them

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes
Erik Craddock (@eriklink)

Personality and Persuasion - by Ethan Mollick

We're entering a world where AI personalities become persuaders. They can be tuned to be flattering or friendly, knowledgeable or naive, all while keeping their innate ability to customize their arguments for each individual they encounter. The implications go beyond whether you choose lemonade over water. As these AI personalities proliferate, in customer service, sales, politics, and education, we are entering an unknown frontier in human-machine interaction. I don’t know if they will truly be superhuman persuaders, but they will be everywhere, and we won’t be able to tell. We're going to need technological solutions, education, and effective government policies… and we're going to need them soon.

oneusefulthing.org

Learning from Sycophants

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes
Erik Craddock (@eriklink)

On Jagged AGI: o3, Gemini 2.5, and everything after

In some tasks, AI is unreliable. In others, it is superhuman. You could, of course, say the same thing about calculators, but it is also clear that AI is different. It is already demonstrating general capabilities and performing a wide range of intellectual tasks, including those that it is not specifically trained on. Does that mean that o3 and Gemini 2.5 are AGI? Given the definitional problems, I really don’t know, but I do think they can be credibly seen as a form of “Jagged AGI” - superhuman in enough areas to result in real changes to how we work and live, but also unreliable enough that human expertise is often needed to figure out where AI works and where it doesn’t. Of course, models are likely to become smarter, and a good enough Jagged AGI may still beat humans at every task, including those where it is currently weak.

oneusefulthing.org

New models and new thresholds

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes
Erik Craddock (@eriklink)

No elephants: Breakthroughs in image generation

Over the past two weeks, first Google and then OpenAI rolled out their multimodal image generation abilities. This is a big deal. Previously, when a Large Language Model AI generated an image, it wasn’t really the LLM doing the work. Instead, the AI would send a text prompt to a separate image generation tool and show you what came back. The AI creates the text prompt, but another, less intelligent system creates the image. For example, if prompted “show me a room with no elephants in it, make sure to annotate the image to show me why there are no possible elephants,” the less intelligent image generation system would see the word elephant multiple times and add them to the picture. As a result, AI image generations were pretty mediocre, with distorted text and random elements; sometimes fun, but rarely useful.

oneusefulthing.org

When Language Models Learn to See and Create

Link · by Ethan Mollick · via One Useful Thing
0 Replies · 0 Boosts · 0 Likes