AWIP's Stance on Artificial Intelligence

AI's measure of success is deception: the best AI content is indistinguishable from human work. When quality means "can you tell this wasn't made by a person?", deception becomes the product specification. AI has legitimate applications in data classification and highly structured environments where it transparently does computational work. But innovation, ingenuity and genuine connection require human presence that algorithms cannot replicate.

Jon Crowder

AI: Deceptive By Design

There's something revealing about how we measure artificial intelligence success. The best AI content is the stuff you can't distinguish from human work. The most impressive chatbot is the one that passes for a real person. The breakthrough in generated images comes when you can no longer tell they're synthetic.

Think about what that means. We're celebrating technology specifically for its ability to deceive.

At Another Web is Possible, we're deeply sceptical of AI precisely because its core value proposition is disguise. When the measure of quality is "can you tell this wasn't made by a human?", you've built deception into the product specification.

The Problem With Invisible AI

Effective AI content only works by hiding its nature. Generated text that sounds too artificial gets rejected by the reader, whether the tell is the smoothed-off edges of a large language model's processing or the over-indexing of certain phrases in the training data. Images that are obviously synthetic are uncanny and off-putting. The otherworldly shine of Gemini's take on hyper-realism. The affectionately named "piss filter" on ChatGPT's image generator that gives everything a weird yellow/sepia tone. The entire development trajectory of these tools points towards making them indistinguishable from human output.

This creates fundamental problems for any business claiming to value authenticity and transparency.

When readers can't tell whether content was written by a person or generated by a model, trust becomes impossible. When customers can't determine if they're engaging with genuine human insight or algorithmic output, the relationship is transactional rather than meaningful.

Companies deploying AI at scale aren't advertising it. They're hiding it, treating it as an ugliness to be concealed, because they understand that people value human connection, human creativity, and human judgement far more than they want to see those things replaced by a machine. So they use AI to fake those qualities whilst hoping nobody notices.

And my friends... That's not innovation. It's fraud with a bit of the ol' razzle dazzle.

Where AI Works

Don't read this as a blanket rejection of all algorithmic tools. There are legitimate applications where AI adds genuine value without requiring deception.

Data classification at scale can make sense. When you need to categorise thousands of support tickets, identify patterns in user behaviour, or flag potential security issues, algorithmic processing can be genuinely useful, so long as there's a human in the loop to validate and confirm. Nobody's pretending the algorithm is a person. It's doing structured, repetitive work that computers handle well.
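To make that concrete, here's a minimal sketch of what human-in-the-loop classification can look like. The classifier, categories, and confidence threshold below are hypothetical placeholders, not a real AWIP system; the point is simply that the algorithm handles the repetitive routing whilst a person confirms anything it isn't sure about.

```python
# A minimal sketch of human-in-the-loop ticket classification.
# The classifier, categories, and threshold are hypothetical
# illustrations, not a description of any real system.

from dataclasses import dataclass

CATEGORIES = ["billing", "bug report", "feature request", "security"]
CONFIDENCE_THRESHOLD = 0.85  # below this, a person decides


@dataclass
class Prediction:
    category: str
    confidence: float


def classify(ticket_text: str) -> Prediction:
    """Stand-in for any statistical classifier (model call elided)."""
    return Prediction(category="billing", confidence=0.62)


def ask_human_reviewer(ticket_text: str, suggested: str) -> str:
    """A person validates and confirms the uncertain cases."""
    print(f"Ticket: {ticket_text}")
    print(f"Suggested category: {suggested}")
    return input(f"Confirm or choose from {CATEGORIES}: ").strip()


def route_ticket(ticket_text: str) -> str:
    prediction = classify(ticket_text)
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        # The machine handles the structured, repetitive routing...
        return prediction.category
    # ...and a human stays in the loop for everything else.
    return ask_human_reviewer(ticket_text, suggested=prediction.category)
```

Nothing here pretends to be a person; the model does triage, and a reviewer owns the final call.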

Highly structured environments where variables are limited and outcomes are measurable can benefit from algorithmic decision-making. Detecting fraudulent transactions, optimising delivery routes, or identifying manufacturing defects all involve pattern recognition within defined parameters.
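For illustration, here's a sketch of what "pattern recognition within defined parameters" can mean in practice: explicit, measurable rules rather than a model imitating human judgement. The thresholds and fields are hypothetical, chosen purely to show the shape of the idea.

```python
# A minimal sketch of rule-based flagging within defined parameters.
# Thresholds and fields are hypothetical, for illustration only.

from datetime import datetime, timedelta

MAX_AMOUNT = 5_000.00   # single-transaction ceiling
MAX_PER_HOUR = 10       # velocity limit per account


def flag_transaction(amount: float, account_history: list[datetime],
                     now: datetime) -> list[str]:
    """Return the reasons a transaction looks suspicious, if any."""
    reasons = []
    if amount > MAX_AMOUNT:
        reasons.append(f"amount {amount:.2f} exceeds {MAX_AMOUNT:.2f}")
    recent = [t for t in account_history if now - t < timedelta(hours=1)]
    if len(recent) >= MAX_PER_HOUR:
        reasons.append(f"{len(recent)} transactions in the last hour")
    return reasons  # an empty list means nothing to flag
```

The variables are limited, the outcomes are measurable, and every flag can be explained and audited.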

These applications succeed precisely because they're not trying to replicate human creativity or substitute for human connection. They're computational tools doing computational work, transparently and without pretence. The moment, however, you decide that the human in the loop is no longer needed and that the tools can run the workshop, you have created an environment that is 'anti-user'.

The Irreplaceable Human Element

Innovation doesn't come from interpolating existing data. It comes from humans making unexpected connections, challenging assumptions, and imagining possibilities that don't yet exist in any training dataset.

Ingenuity isn't pattern-matching and graphing. Ingenuity is seeing problems differently, combining disciplines in novel and interesting ways, and developing solutions that nobody predicted. To some extent, it's understanding the need behind the need, the one that goes unarticulated.

Connection requires presence. Real relationships form between humans who understand each other's context, emotions, and intentions. No algorithm can replicate the experience of being genuinely understood by another person.

When you read content created by AI, you're engaging with a statistical model of human communication. When you read content written by a person, you're connecting with someone who has lived experiences, holds genuine beliefs, and brings their full self to the work.

That difference really matters.

Our Commitment to Transparency

Another Web is Possible will never use AI to generate content or creative work that we present as human-authored. To some extent AI is embedded in all modern web tools whether we like it or not, so switching it off completely is not an option. But every article, every case study, and every piece of advice comes from real people with real expertise and real accountability for what we produce.

When we do use AI tools for legitimate purposes like data classification, parts of our code, or structured analysis, we'll tell you. Explicitly. You won't need to guess whether you're reading generated content or human writing. You'll know. Case in point: AI generated some of the container layouts for this Webflow website, but they needed heavy editing to be right, and realistically I could have achieved a similar outcome without it.

Your data will never be submitted to any AI model without your complete understanding and informed consent. We won't feed your analytics into a language model, process your customer information through algorithmic systems, or use your business data to train any tool without explicit permission and clear explanation of how it will be used.

We respect the relationship between our business and yours. When you share information with us, you deserve to know exactly what happens with it.

The Long-Term Cost of Deception

Companies rushing to deploy AI without transparency are building on unstable foundations. When customers discover that content they believed was human-authored was actually generated, trust collapses. When employees realise their work is being replaced by tools designed to mimic their output, morale disappears immediately.

Any short-term efficiency gains are dwarfed by the long-term relationship damage.

More fundamentally, businesses that substitute algorithmic output for human creativity are training themselves out of the skills that actually create value. Innovation, strategic thinking, and genuine customer understanding all require human judgement that atrophies when outsourced to pattern-matching algorithms.

What We Believe

We believe users deserve to know when they're engaging with humans versus algorithms. We believe authentic human work has irreplaceable value that shouldn't be disguised or substituted. We believe businesses should build on genuine capabilities rather than computational mimicry.

Most importantly, we believe another web is possible. One where technology serves human connection rather than seeking to replace it. Where tools amplify human creativity instead of imitating it. Where businesses compete on genuine innovation and authentic relationships rather than on who can most convincingly fake human output.

That's the web we're working towards. And it starts with being honest about what AI actually is: a tool with legitimate structured applications, but one that fundamentally cannot and should not replace genuine human insight, creativity, and connection.

You deserve better than algorithmic imitation. We're committed to providing it.
