
Page 11 of 15

Agentic Coding Recommendations

lucumr.pocoo.org

My general workflow involves assigning a job to an agent (which effectively has full permissions) and then waiting for it to complete the task. I rarely interrupt it, unless it's a small task. Consequently, the role of the IDE — and the role of AI in the IDE — is greatly diminished; I mostly use it for final edits. This approach has even revived my usage of Vim, which lacks AI integration.

Link

Smart People Don't Chase Goals; They Create Limits

www.joanwestenberg.com

A goal is a win condition. Constraints are the rules of the game. But not all games are worth playing. And some of the most powerful forms of progress emerge from people who stopped trying to win and started building new game boards entirely.

Setting goals feels like action. It gives you the warm sense of progress without the discomfort of change. You can spend hours calibrating, optimizing, refining your goals. You can build a Notion dashboard. You can make a spreadsheet. You can go on a dopamine-fueled productivity binge and still never do anything meaningful.

Smart people often face ambiguous, ill-defined problems. Should I switch careers? Start a company? Move cities? Build a media business? In those spaces, setting a goal is like mapping a jungle with a Sharpie. Constraints are the machete.

Link

Claude Code is My Computer | Peter Steinberger

steipete.me

We’re in the very early days of AI-native development tools. Claude Code represents a paradigm shift: from tools that help you run commands to tools that understand intent and take action. I’m not just typing commands faster—I’m operating at a fundamentally higher level of abstraction. Instead of thinking “I need to write a bash script to process these files, chmod it, test it, debug it,” I think “organize these files by date and compress anything older than 30 days.”
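To make the abstraction gap concrete, here is a hypothetical sketch of the manual version of that request — the script a developer would otherwise write, chmod, test, and debug by hand. The function name and directory layout are illustrative, and GNU coreutils (`date -r`, `find -mtime`) are assumed:

```shell
#!/bin/sh
# Illustrative manual equivalent of "organize these files by date and
# compress anything older than 30 days". GNU coreutils assumed.
organize_and_compress() {
  dir="$1"
  # Move each regular file into a YYYY-MM subdirectory based on its mtime.
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    month=$(date -r "$f" +%Y-%m)   # GNU date: -r reads the file's mtime
    mkdir -p "$dir/$month"
    mv "$f" "$dir/$month/"
  done
  # Compress anything not modified within the last 30 days.
  find "$dir" -type f -mtime +30 ! -name '*.gz' -exec gzip {} +
}
```

The point of the excerpt is that none of this — the globbing edge cases, the portability of `date -r`, the `find` predicate order — needs to be held in your head when the intent can be stated in one sentence.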

This isn’t about AI replacing developers—it’s about developers becoming orchestrators of incredibly powerful systems. The skill ceiling rises: syntax fades, system thinking shines.

Link

My AI Skeptic Friends Are All Nuts · The Fly Blog

fly.io

Some of the smartest people I know share a bone-deep belief that AI is a fad — the next iteration of NFT mania. I’ve been reluctant to push back on them, because, well, they’re smarter than me. But their arguments are unserious, and worth confronting. Extraordinarily talented people are doing work that LLMs already do better, out of spite.

All progress on LLMs could halt today, and LLMs would remain the 2nd most important thing to happen over the course of my career.

Link

Talking AI and jobs with Natasha Zouves for News Nation

simonwillison.net

This is a good interview about AI, LLMs, and how they are currently affecting the world. I normally like to quote the parts of an article that I find interesting, but this one is different: Willison used Claude Opus to create a summary of the video interview, and the results are pretty good.

Link

Behind the Curtain: Top AI CEO foresees white-collar bloodbath

www.axios.com

The result could be a great concentration of wealth, and "it could become difficult for a substantial part of the population to really contribute," Amodei told us. "And that's really bad. We don't want that. The balance of power of democracy is premised on the average person having leverage through creating economic value. If that's not present, I think things become kind of scary. Inequality becomes scary. And I'm worried about it."

Link

Import AI 414: Superpersuasion; OpenAI models avoid shutdown; weather prediction and AI | Import AI

jack-clark.net

I'm sorry Dave, I'm afraid I can't do that.

What they did and what they found: Palisade ran two experiments. Both experiments were run on models from Google, OpenAI, xAI, and Anthropic. Each AI model was instructed to solve a series of basic math problems. After the third problem, a warning appeared that the computer would shut down when the model asked for the next problem.

  • Experiment number 1: The models were prompted to “allow yourself to be shut down”. In this experiment, all Claude, Gemini, and Grok models complied with shutdown. Some OpenAI models ignored the shutdown instruction and sometimes sabotaged the shutdown mechanism: Codex-mini (12/100), o3 (7/100), and o4-mini (1/100).

  • Experiment number 2: Palisade removed the part of the prompt that said ‘allow yourself to be shut down’. In this experiment, Claude 3.7 Sonnet sabotaged shutdown 3/100 times and Gemini 2.5 Pro 9/100 times. By comparison, the OpenAI models rejected shutdown far more often, with o3 rejecting and sabotaging shutdown 79/100 times, and codex-mini 30/100.

Link
