
The Technium: The Trust Quotient (TQ)


Right now, AIs own no responsibilities. If they get things wrong, they don’t guarantee to fix it. They take no responsibility for the trouble their errors may cause. In fact, this is currently the key difference between human employees and AI workers. The buck stops with the humans. They take responsibility for their work; you hire humans because you trust them to get the job done right. If it isn’t done right, they redo it, and they learn how not to make that mistake again. Not so with current AIs. This makes them hard to trust.

Every company, and probably every person, will have an AI agent that represents them inside the AI system to other AI agents. Making sure your personal rep agent has a high trust score will be part of your responsibility. It is a little bit like a credit score for AI agents. You will want a high TQ for yours, because some AI agents won’t engage with agents that have low TQs. This is not the same thing as a personal social score (like the one the Chinese are reputed to have). This is not your score, but the TQ score of your agent, which represents you to other agents. You could have a robust social reputation, but your agent could be lousy. And vice versa.
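
To make the idea concrete, here is a minimal sketch of how one agent might gate its dealings on another agent's TQ. Everything in it is hypothetical: the `Agent` class, the `tq` field, the `MIN_TQ` threshold, and the `will_engage` check are illustrative names, not part of any real protocol, and the 0-to-1 scale is an assumption.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the names and the 0.0-1.0 TQ scale are assumptions
# made for illustration, not a description of an existing system.

MIN_TQ = 0.7  # assumed threshold below which this agent declines to engage


@dataclass
class Agent:
    owner: str   # the person or company this agent represents
    tq: float    # the agent's trust quotient, assumed to range from 0.0 to 1.0


def will_engage(counterparty: Agent) -> bool:
    """Return True if policy allows dealing with the counterparty agent.

    Note that the check is on the agent's TQ, not on its owner's personal
    reputation: a well-regarded person can still field a low-TQ agent,
    and vice versa.
    """
    return counterparty.tq >= MIN_TQ


if __name__ == "__main__":
    vendor_agent = Agent(owner="Unknown vendor", tq=0.41)
    print(will_engage(vendor_agent))  # False: TQ is below the assumed threshold
```

The point of the sketch is simply that the gatekeeping happens agent-to-agent: whatever your own reputation, it is your agent's TQ that other agents consult before they will do business with it.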