Feed


Move Faster

blog.sshh.io

With general intelligence, you can go a layer deeper: you can accelerate the acceleration. You don't just write the prompt that fixes the code; you build the evaluation pipeline that automatically optimizes the prompts. You stop working on the work, and start working on the optimization of the work. You shift from First-Order execution (doing the thing), to Second-Order automation (improving the system), to Third-Order meta-optimization (automating the improvement of the system). AI eats the lower derivatives, constantly pushing you up the stack to become the architect of the machine that builds the machine.
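
To make the second-order idea concrete, here is a minimal sketch of a prompt-optimization loop. The run_model and propose_variants functions are hypothetical toy stand-ins; a real pipeline would call a model API and a real eval harness:

```python
# Minimal sketch of a second-order loop: instead of hand-fixing one prompt,
# search over prompt variants scored against a fixed eval set.
# run_model and propose_variants are toy stand-ins for real API calls.

def run_model(prompt: str, inp: str) -> str:
    # Hypothetical model: longer prompts "behave better" in this toy.
    return inp if len(prompt) > 40 else inp.upper()

def propose_variants(prompt: str) -> list[str]:
    # Hypothetical: in practice you might ask a model to rewrite the prompt.
    return [prompt + " Be concise.", prompt + " Think step by step."]

def score_prompt(prompt: str, cases: list[tuple[str, str]]) -> float:
    """Fraction of (input, expected) pairs the prompt gets right."""
    return sum(run_model(prompt, i) == want for i, want in cases) / len(cases)

def optimize(seed: str, cases: list[tuple[str, str]], rounds: int = 5) -> str:
    best, best_score = seed, score_prompt(seed, cases)
    for _ in range(rounds):
        for cand in propose_variants(best):
            if (s := score_prompt(cand, cases)) > best_score:
                best, best_score = cand, s
    return best
```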

You can't leave anything on the table. This is Amdahl's Law for AI transformation: as the "core" work approaches zero duration, the "trivial" manual steps you ignored—the 10-minute deploy, the manual data entry on a UI, the waiting for CI—become the entire bottleneck. The speed of your system is no longer determined by how fast you code, but by the one thing you didn't automate. If an agent can fix a bug in 5 minutes but it takes 3 days for Security to review the text or 2 days for Design to approve the padding, the organization has become the bug. You need to treat organizational latency with the same severity you treat server latency.
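
The arithmetic behind that bottleneck claim is just Amdahl's Law; a quick sketch with illustrative numbers:

```python
# Amdahl's Law: if a fraction p of the cycle is sped up by factor s,
# overall speedup = 1 / ((1 - p) + p / s).

def overall_speedup(p: float, s: float) -> float:
    return 1 / ((1 - p) + p / s)

# Illustrative: coding is 90% of the cycle and agents make it 50x faster,
# but the unautomated 10% (deploys, reviews, waiting on CI) stays manual.
print(round(overall_speedup(0.9, 50), 1))   # 8.5 -- not 50
print(round(overall_speedup(0.9, 1e9), 1))  # 10.0 -- the manual 10% is the cap
```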

Link

Your App Subscription Is Now My Weekend Project

rselbach.com

I'm still skeptical of vibecoding in general. As I mentioned above, I would not trust my vibecoding enough to make these into products. If something goes wrong, I don't know how to fix it. Maybe my LLM friends can, but I don't know. But vibecoding is 100% viable for personal stuff like this: we now have apps on demand.

Link

The Code-Only Agent

rijnard.com

I expect that the following is going to get worse before it gets better.

Many of us got hit by the agent coding addiction. It feels good, we barely sleep, we build amazing things. Every once in a while that interaction involves other humans, and all of a sudden we get a reality check that maybe we overdid it. The most obvious example of this is the massive degradation in the quality of issue reports and pull requests. As a maintainer, many PRs now look like an insult to one's time, but when one pushes back, the other person does not see what they did wrong. They thought they helped and contributed, and they get agitated when you close it down.

I often feel like this as well. However, I think the truth is that what we now consider code quality isn't as important as we think it is.

Or maybe some of us are genuinely losing the plot, and we won't know which camp we're in until we look back. All I know is that when I watch someone at 3am, running their tenth parallel agent session, telling me they've never been more productive — in that moment I don't see productivity. I see someone who might need to step away from the machine for a bit. And I wonder how often that someone is me.

I'm more worried about the need to step away from the computer. Not because of the code quality issue, but because of the addiction part of it. So far there is always some new feature, some new tool, or some new idea. The kind of thought process it takes to dream up what to do is less daunting than writing the code by hand. I personally have to make myself stop. I don't really get worn out working this way. I get sleepy.

Link

From passwords to passkeys

ssg.dev

1994: The web browser Netscape Navigator introduces the encrypted HTTPS (HTTP over SSL) protocol, which shows a warning when used, so people can make an informed decision to be secure or not by doing their own research.

Link

Import AI 440: Red queen AI; AI regulating AI; o-ring automation

jack-clark.net

The world is going to look a lot like Core Wars – millions of AI agents will be competing against one another in a variety of domains, ranging from cybersecurity to economics, and will be optimizing themselves in relation to achieving certain competitive criteria. The result will be sustained, broad evolution of AI systems and the software harnesses and tooling they use to get stuff done. This means that along with human developers and potential AI-designed improvements, we'll also see AI systems improve from this kind of broad competitive pressure.

Jobs go away, but humans don't: Another way to put this is, when a task gets automated it's not like the company in question suddenly fires all the people doing that job. Consider ATMs and banking – yes, the 'job' of doling out cash rapidly transitioned from people to machines, but it's not like the company fired all tellers – rather, the companies and the tellers transitioned the work to something else: "Under a separable task model, this [widespread deployment of ATMs doing cash-handling tasks] should have produced sharp displacement," they write. "Yet teller employment did not collapse; rather, the occupation shifted toward 'relationship banking' and higher-value customer interaction."

Link

2026 is the Year of Self-hosting

fulghum.io

This is the way. I've been doing something similar with my homelab. I built an entire documentation repo that does nothing but tell the clanker how to manage my homelab.

Don't try this at home... er actually only try this at home.

Link

Claude Code and What Comes Next

www.oneusefulthing.org

Skills solve this problem. They are instructions that the AI decides when to use, and they contain not just prompts, but also the sets of tools the AI needs to accomplish a task. Does it need to know how to build a great website? It loads up the Website Creator Skill, which explains how to build a website and the tools to use when doing it. Does it need to build an Excel spreadsheet? It loads the Excel skill with its own instructions and tools. To make another movie reference, it is like when Neo in The Matrix gets martial arts instructions uploaded to his head and acquires a new skill: "I know kung fu." Skills can let an AI cover an entire process by swapping out knowledge as needed. For example, Jesse Vincent released an interesting free list of skills that let Claude Code handle a full software development process, picking up skills as needed, starting with brainstorming and planning before progressing all the way to testing code. Skill creation is technically very easy: it is done in plain language, and the AI can actually help you create them (more on this in a bit).
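
For a sense of what "technically very easy" means here: a skill is essentially a folder containing a plain-language instruction file with a short metadata header. The skill below is a hypothetical sketch of that shape, not one of Vincent's:

```markdown
---
name: website-creator
description: Use when the user asks to build, scaffold, or restyle a website.
---

# Website Creator

1. Ask for the site's purpose, audience, and must-have pages.
2. Scaffold the project and apply the conventions in style-guide.md
   (a hypothetical companion file shipped alongside this skill).
3. Run the local dev server and verify that every page renders.
```

The description line is what lets the model decide when to pull the rest of the skill into context.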

Link

TDD is more important than ever

justin.searls.co

Why is verification so important? Because, if you tell an agent to do something that it can't independently verify, then—just like a human developer—the best it can do is guess. And because agents work really fast, each action based on a guess is quickly succeeded by an even more tenuous guess. And then a guess of a guess of a guess, and so on. Very often, when I return to my desk after 30 minutes and find that an agent made a huge mess of the code, I come to realize that the AI didn't suddenly "get dumb," but rather that an application server crashed or a web browser stopped responding and the agent was forced to code speculatively and defensively.
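
One concrete way to give an agent something it can independently verify is to hand it a failing test before it writes any code; a minimal sketch (the slugify example is hypothetical):

```python
# Write the test first; the agent's loop is then "edit, rerun, repeat"
# against a real signal instead of a guess. Hypothetical example.
import re

def slugify(title: str) -> str:
    # Implementation the agent iterates on until the tests pass.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  TDD -- still matters  ") == "tdd-still-matters"

if __name__ == "__main__":
    test_slugify()
    print("ok")  # the agent reruns this after every edit
```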

Link

Import AI 435: 100k training runs; AI systems absorb human power; intelligence per watt

jack-clark.net

This gives me an eerie feeling. In most movies where the world ends there’s a bit at the beginning of the movie where one or two people point out that something bad is going to happen – an asteroid is about to hit the planet, a robot has been sent back in time to kill them, a virus is extremely contagious and dangerous and must be stamped out – and typically people will disbelieve them until either it’s a) too late, or b) almost too late. Reading papers by scientists about AI safety feels a lot like this these days. Though perhaps the difference with this movie is that, rather than one or two fringe characters warning about what is coming, it’s now a community of hundreds of highly accomplished scientists, including Turing Award and Nobel Prize winners.

Link

Don't Fight the Weights

www.dbreunig.com

Today, in-context learning is a standard trick in any context engineer’s toolkit. Provide a few examples illustrating what you want back, given an input, and trickier tasks tend to get more reliable. They’re especially helpful when we need to induce a specific format or style or convey a pattern that’s difficult to explain.

When you’re not providing examples, you’re relying on the model’s inherent knowledge base and weights to accomplish your task. We sometimes call this “zero-shot prompting” (as opposed to few-shot) or “instruction-only prompting”.
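
The difference is easiest to see side by side; a sketch with made-up sentiment examples:

```python
# Zero-shot: instructions only, leaning entirely on the model's weights.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: a couple of in-context examples pin down format and pattern.
few_shot = (
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: Works exactly as advertised.\n"
    "Sentiment: positive\n\n"
    "Review: Arrived broken and support never replied.\n"
    "Sentiment: negative\n\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)
```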

Link
