Thoughts from the asylum

Thoughts from the asylum

Artificial Intelligence

Not as Smart as the Hype but Twice as Dangerous

Mar 20, 2026
∙ Paid

Welcome, dear readers, as we dive again into the madhouse of our current world. This week, we discuss Artificial Intelligence (AI). Though I have little experience in AI development, I’ve worked in Information Technology for over 30 years. I’ve read extensively about AI’s technical aspects and used it professionally for the past three years. We keep hearing that AI is either an amazing tool to enhance human reasoning or an extinction-level threat to humanity.

There is a lot of confusion about what AI is and what it is not, and we are seeing both amazing things happening through appropriate AI use cases and terrible injustices arising from inappropriate ones. Part of what makes a use case inappropriate is the belief, by some, that AI is infallible: if it says something is 100%, then it is. This is not the case; I will show you where AI is helpful, but it shouldn’t be taken as infallible.

AI refers to computer systems that learn statistical patterns from large datasets to perform tasks such as prediction, classification, and content generation. These systems are often built using models like neural networks, which approximate complex relationships in data. AI does not possess consciousness, emotions, or lived experience; any apparent “understanding” is the result of pattern recognition rather than genuine comprehension. AI follows clear rules well but struggles with interpretation and inference. It cannot handle abstract or situational exceptions, unless specifically programmed. When something falls outside its parameters, it may hallucinate giving nonsensical or false answers. AI lacks physical reality, morality, and empathy. All AI is just extremely complex decision trees supported by vast amounts of data.

When a new AI is born, it is given a set of base rules, or guardrails, that it cannot cross. Examples of these are:

• Anti-Jailbreak: Detecting and blocking prompts designed to bypass safety filters (e.g., “Ignore previous instructions”).

• Prompt Injection Detection: Blocking malicious inputs trying to overwrite system instructions.

• Topic Restriction: Restricting conversations strictly to approved topics (e.g., a bank chatbot refusing to discuss politics).

• PII Sanitization: Detecting and redacting personal data like phone numbers, emails, or Social Security numbers before showing them to a user.

• Toxicity Mitigation: Blocking generated content that contains hate speech, profanity, or harassment.

• Hallucination Check: Ensuring answers are explicitly based on provided retrieved content and restricting the AI from “making up” facts.

• Output Format Enforcement: Ensuring the AI only outputs valid formats, such as JSON or specific schema structures, preventing unstructured output.

• Action Authorization: Restricting AI agents from executing high-risk actions (like sending emails or making purchases) without human authorization.

• Adversarial Robustness: Filtering out garbled or confusing input (noise) designed to make the model act erratically.

• Compliance & Ethical Rules

• Bias Detection: Flagging or blocking content that shows discrimination based on gender, race, or ethnicity.

• Regulatory Guardrails: Ensuring output meets industry regulations (e.g., HIPAA compliance for medical information).

These rules often reflect the values and perspectives of the programmers who develop the AI models. This may manifest as outputs that reflect certain social perspectives or responses. Most of the time, these guardrails center around contemporary social considerations rather than classic frameworks like Asimov’s Three Laws of Robotics. These guardrails establish the rules for building the decision tree that becomes the AI. After this, the AI is trained by being fed foundational knowledge, such as grammar and dictionaries for its languages (where the AI excels at following rules), and then further data depending on its intended use. This might include extensive libraries of books, large sections of the internet, videos, art, images, and other data types. Programmers exercise substantial control over the information the AI trains on, which can influence the range of perspectives it presents. Previous versions of some AI models were observed to reflect limited viewpoints. Changes in the industry, such as the introduction of new platforms, have led to shifts in how models are designed, with some aiming for more balanced, neutral outputs.

We all keep hearing about AI taking all the jobs. This is largely just fearmongering, and while AI will do away with some jobs, it is generations away from fully replacing humans, if ever. There are major constraints to the advancement of AI. One is that AI is incapable of abstract thought and is fully constrained by the rules set for it. This makes it incapable of making nuanced decisions.

There will be missteps along the way like this. AI is ill-suited for complex customer service, but we are seeing more and more of it in the form of AI customer service representatives. They cannot fully weigh all the factors that a human representative might consider when granting exceptions, service credit, or any extenuating circumstance that isn’t pre-programmed (history tells us people suck at predicting every possible situation). Right now, you are seeing cases in companies that have gone all in or nearly all in on AI customer service, where a loyal 20-year customer with a 95%+ on-time payment rate is denied an extension and disconnected because there is no flexibility in AI’s rules. We also see too much flexibility in the rules (trying to correct for cases where long-time customers are snubbed or where other errors are common), and AIs are giving new or poor customers massive service credits or other unacceptably large concessions. From a company’s point of view, AI won’t ask for a raise, won’t call in sick, won’t get upset or stressed with angry customers, and the list goes on. This is resulting in AI being rolled out at scale in customer service, but AI is bad at it and will result in a net negative, outweighing the short-term benefits over time. I think AI in customer service will continue to grow over the next couple of years, making it nearly impossible to speak with a human or get a satisfactory resolution to complex issues. Then, within less than 5 years, someone will advertise all-human customer service as a competitive advantage; it will work and be massively successful. Then, customer service, as an industry, will swing back the other way toward human help, and it will settle on AI as the surface, with humans backing it up for anything that isn’t extra simple.

Soon, AI will handle nearly all computer programming. Give AI the rules for a language and functional code examples, and jobs that took experts weeks can be done in hours. AI lacks creativity, but it can produce a program from a simple manager’s prompt.

“I need a program that allows me to enter a name or point to a list of names that will cross-reference databases X and Y to provide me a full history of A, B, and C transactions with D and E representatives of our company.”

If AI knows the language and relevant databases, it can quickly create the requested program. Human developers would take much longer. AI reduces development from months to minutes and makes it painless: have an idea, tell AI, get a program quickly. Run it, see what to change, repeat with AI until satisfied. This streamlines what once took months or years.

User's avatar

Continue reading this post for free, courtesy of Sam Brunson.

Or purchase a paid subscription.
© 2026 S T Brunson · Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture