Feed

Page 14 of 16

Personality and Persuasion - by Ethan Mollick

www.oneusefulthing.org

we're entering a world where AI personalities become persuaders. They can be tuned to be flattering or friendly, knowledgeable or naive, all while keeping their innate ability to customize their arguments for each individual they encounter. The implications go beyond whether you choose lemonade over water. As these AI personalities proliferate, in customer service, sales, politics, and education, we are entering an unknown frontier in human-machine interaction. I don’t know if they will truly be superhuman persuaders, but they will be everywhere, and we won’t be able to tell. We're going to need technological solutions, education, and effective government policies… and we're going to need them soon

Link

The $20,000 American-made electric pickup with no paint, no stereo, and no touchscreen | The Verge

www.theverge.com

Meet the Slate Truck, a sub-$20,000 (after federal incentives) electric vehicle that enters production next year. It only seats two yet has a bed big enough to hold a sheet of plywood. It only does 150 miles on a charge, only comes in gray, and the only way to listen to music while driving is if you bring along your phone and a Bluetooth speaker. It is the bare minimum of what a modern car can be, and yet it’s taken three years of development to get to this point.

But this is more than bargain-basement motoring. Slate is presenting its truck as minimalist design with DIY purpose, an attempt to not just go cheap but to create a new category of vehicle with a huge focus on personalization. That design also enables a low-cost approach to manufacturing

Link

Exclusive: Anthropic warns fully AI employees are a year away

www.axios.com

The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company's chief information security officer, told Axios.

  • Agents typically focus on a specific, programmable task. In security, that's meant having autonomous agents respond to phishing alerts and other threat indicators.
  • Virtual employees would take that automation a step further: These AI identities would have their own "memories," their own roles in the company and even their own corporate accounts and passwords.
  • They would have a level of autonomy that far exceeds what agents have today.
  • "In that world, there are so many problems that we haven't solved yet from a security perspective that we need to solve," Clinton said.
Link

AI assisted search-based research actually works now

simonwillison.net

I’m writing about this today because it’s been one of my “can LLMs do this reliably yet?” questions for over two years now. I think they’ve just crossed the line into being useful as research assistants, without feeling the need to check everything they say with a fine-tooth comb.

I still don’t trust them not to make mistakes, but I think I might trust them enough that I’ll skip my own fact-checking for lower-stakes tasks.

This also means that a bunch of the potential dark futures we’ve been predicting for the last couple of years are a whole lot more likely to become true. Why visit websites if you can get your answers directly from the chatbot instead?

The lawsuits over this started flying back when the LLMs were still mostly rubbish. The stakes are a lot higher now that they’re actually good at it!

I can feel my usage of Google search taking a nosedive already. I expect a bumpy ride as a new economic model for the Web lurches into view.

Link

Import AI 409: Huawei trains a model on 8,000+ Ascend chips; 32B decentralized training run; and the era of experience and superintelligence | Import AI

jack-clark.net

Decentralized AI startup Prime Intellect has begun training INTELLECT-2, a 32 billion parameter model designed to compete with modern reasoning models. In December, Prime Intellect released INTELLECT-1, a 10b parameter model trained in a distributed way (Import AI #393), and in August it released a 1b parameter model trained in a distributed way (Import AI #381). You can follow along the training of the model here – at the time of writing there were 18 distinct contributors training it, spread across America, Australia, and Northern Europe.

Link

The Technium: Epizone AI: Outside the Code Stack

kk.org

I propose that AI will not disrupt human daily life until it also migrates from a genetic-ish code-based substrate to a widespread, heterodox culture-like platform. AI needs to have its own culture in order to evolve faster, just as humans did. It cannot remain just a thread of improving software/hardware functions; it must become an embedded ecosystem of entities that adapt, learn, and improve outside of the code stack. This AI epizone will enable its cultural evolution, just as the human society did for humans.

Link

ASI existential risk: reconsidering alignment as a goal

michaelnotebook.com

reality doesn't care about human psychology. When alignment to anticipated power will lead to unhealthy outcomes, a thriving civilization requires people willing to act in defiance of the zeitgeist, not merely follow the incentive gradient of immediate rewards. I believe the arguments for xrisk are good enough that there is a moral obligation for anyone working on AGI to investigate this risk with deep seriousness, and to act even if it means giving up their own short-term interests.

Link

On Jagged AGI: o3, Gemini 2.5, and everything after

www.oneusefulthing.org

In some tasks, AI is unreliable. In others, it is superhuman. You could, of course, say the same thing about calculators, but it is also clear that AI is different. It is already demonstrating general capabilities and performing a wide range of intellectual tasks, including those that it is not specifically trained on. Does that mean that o3 and Gemini 2.5 are AGI? Given the definitional problems, I really don’t know, but I do think they can be credibly seen as a form of “Jagged AGI” - superhuman in enough areas to result in real changes to how we work and live, but also unreliable enough that human expertise is often needed to figure out where AI works and where it doesn’t. Of course, models are likely to become smarter, and a good enough Jagged AGI may still beat humans at every task, including in ones the AI is weak in.

Link

Model Context Protocol has prompt injection security problems

simonwillison.net

As more people start hacking around with implementations of MCP (the Model Context Protocol, a new standard for making tools available to LLM-powered systems) the security implications of tools built on that protocol are starting to come into focus.

Link

Page 14 of 16