AI Ethics & Responsible AI
Every AI system we build is reviewed against these principles before go-live. Not because it's required — because it's right.
What is Code and Trust's approach to responsible AI?
Code and Trust implements responsible AI practices on every project: bias testing against representative data samples before deployment, human-in-the-loop review for high-consequence decisions, explainability requirements for regulated industries, and rate limiting that prevents an AI system from taking too many consequential actions without human review. These aren't checkbox practices; they're built into the system architecture.
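One of these controls, rate limiting, is easy to show in miniature. The sketch below is illustrative rather than a client implementation; the class name `ActionGate` and its thresholds are our own invention. Actions within a per-window budget execute autonomously; anything past the budget is held for a person.

```python
import time
from collections import deque

class ActionGate:
    """Hypothetical rate limit on consequential AI actions.

    Allows at most `max_actions` autonomous actions per `window_seconds`;
    anything beyond that is queued for human review instead of executed.
    """

    def __init__(self, max_actions: int = 10, window_seconds: int = 3600):
        self.max_actions = max_actions
        self.window_seconds = window_seconds
        self.timestamps: deque[float] = deque()
        self.held_for_review: list[dict] = []

    def submit(self, action: dict) -> str:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return "executed"           # within budget: act autonomously
        self.held_for_review.append(action)
        return "held_for_human_review"  # over budget: a person must approve
```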
How does Code and Trust test AI systems for bias?
Every AI classification or ranking system we build is tested against representative data samples from all affected groups before deployment. We measure disparate impact — whether the system produces systematically different outcomes for different demographic groups. Bias testing results are documented and reviewed with clients before go-live. We've identified and corrected bias issues in 3 client AI systems during pre-launch testing.
How We Conduct Bias Testing
- Test data sampling strategy: representative samples from all demographic groups the system will affect, not just a random split of available training data.
- Disparate impact analysis: measure whether outcome rates differ by more than a defined threshold (typically 20%) between groups; a sketch of this check follows the list.
- Bias threshold by use case: stricter thresholds for high-consequence use cases (hiring, medical, financial) than for low-stakes recommendations.
- Documentation: every bias test run is logged with inputs, outputs, metrics, and client sign-off before deployment.
- Remediation: when bias is detected, we identify the source (training data, feature selection, or model architecture) and correct it before proceeding.
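To make the disparate impact bullet concrete, here is a minimal sketch of one common reading of that check, a four-fifths-style rate ratio. The group names and counts are made up for illustration, and an absolute-difference test is an easy swap if an engagement defines the threshold that way.

```python
def disparate_impact_check(outcomes: dict[str, tuple[int, int]],
                           max_gap: float = 0.20) -> dict[str, bool]:
    """Flag groups whose favorable-outcome rate trails the best group's
    rate by more than `max_gap` (the typical 20% threshold above).

    `outcomes` maps group name -> (favorable_count, total_count).
    """
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    best = max(rates.values())
    # A group passes if its rate relative to the best group falls short
    # by no more than the allowed gap (20% -> rate ratio of at least 0.80).
    return {g: (rate / best) >= (1.0 - max_gap) for g, rate in rates.items()}

# Illustrative run with invented numbers:
results = disparate_impact_check({
    "group_a": (80, 100),  # 80% favorable rate
    "group_b": (56, 100),  # 56% favorable rate -> ratio 0.70, fails
})
print(results)  # {'group_a': True, 'group_b': False}
```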
When does AI need human oversight?
High-consequence decisions — medical recommendations, financial approvals, legal determinations, hiring decisions — require human review before action is taken. Code and Trust designs human-in-the-loop checkpoints into any AI system that affects individuals' rights, finances, health, or employment. The AI makes a recommendation; the human approves or overrides it.
| Decision Type | Human Review Required? | Design Pattern |
|---|---|---|
| Medical / clinical recommendation | Always | AI suggests, clinician confirms |
| Financial approval / credit decision | Always | AI scores, human approves |
| Hiring / employment decision | Always | AI ranks, human interviews |
| Legal / compliance determination | Always | AI flags, counsel reviews |
| Content recommendation | Audit logs only | AI acts, logs reviewed weekly |
| Low-stakes personalization | No | AI acts autonomously |
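The "AI suggests, human confirms" rows in the table share one implementation pattern: the model's output is a pending recommendation with no effect until a named reviewer approves or overrides it. A minimal sketch, with hypothetical types and field names of our own:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    OVERRIDDEN = "overridden"

@dataclass
class Recommendation:
    """An AI output that is not actionable until a human decides."""
    case_id: str
    ai_output: str
    status: Status = Status.PENDING
    reviewer: str | None = None
    final_decision: str | None = None

def review(rec: Recommendation, reviewer: str,
           decision: str | None = None) -> Recommendation:
    # No decision given: the reviewer accepts the AI recommendation.
    # Decision given: the human overrides, and their answer is final.
    rec.reviewer = reviewer
    if decision is None or decision == rec.ai_output:
        rec.status, rec.final_decision = Status.APPROVED, rec.ai_output
    else:
        rec.status, rec.final_decision = Status.OVERRIDDEN, decision
    return rec
```

Downstream systems act only on `final_decision`, never on `ai_output`, so there is no code path where the model's raw answer takes effect unreviewed.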
What makes an AI decision explainable?
An explainable AI decision is one where the system can show why it reached a particular output — which inputs had the most weight, which rules were triggered, or which training examples are most similar to the current case. For regulated industries (healthcare, finance, government), explainability is a compliance requirement. We build explanation interfaces into every AI system deployed in regulated contexts.
Explainability is not just a compliance concern — it's also a product quality signal. When users can understand why the AI made a recommendation, they trust the system more and provide better feedback. We implement explanation interfaces as first-class features, not debug output.
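For a simple linear scorer, "which inputs had the most weight" falls straight out of the model. The sketch below is illustrative only: the feature names and training rows are invented, and a production system would typically use a purpose-built attribution method rather than raw coefficient-times-value contributions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "tenure_months", "utilization"]  # hypothetical

# Invented training data for illustration, not client data.
X = np.array([[40, 12, 0.9], [85, 48, 0.2], [30, 6, 0.95], [90, 60, 0.1]])
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def explain(x: np.ndarray, top_k: int = 3) -> list[tuple[str, float]]:
    """Per-feature contribution to the log-odds for one input:
    coefficient * feature value, sorted by absolute size."""
    contributions = model.coef_[0] * x
    order = np.argsort(-np.abs(contributions))[:top_k]
    return [(feature_names[i], float(contributions[i])) for i in order]

print(explain(np.array([50, 24, 0.6])))
```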
What AI use cases does Code and Trust decline to build?
Code and Trust does not build AI systems designed to deceive people about being AI, systems intended to manipulate behavior through psychological exploitation, surveillance systems that operate without the knowledge of those surveilled, or systems that automate consequential decisions about individuals without any human review pathway. These are hard limits regardless of client or compensation.
Deceptive AI personas
Any system designed to make people believe they are communicating with a human when they are not.
Psychological manipulation
Systems that exploit cognitive biases, emotional vulnerabilities, or addiction loops to drive behavior.
Covert surveillance
Surveillance systems — audio, video, location, behavioral — that operate without the knowledge of those being surveilled.
Fully automated high-stakes decisions
Any system that makes final, binding decisions about individuals' rights, employment, health, or finances with no human review pathway.
How does Code and Trust handle data used to train or fine-tune AI models?
Client data used for AI fine-tuning or RAG systems remains client data — it is not used for any other purpose, shared with any other client, or retained by Code and Trust after project completion. We document all data flows for AI systems in client-facing architecture documentation. No client data is fed to third-party AI training pipelines without explicit written consent.
- Data used to build client AI systems is isolated per engagement; no cross-client data sharing, ever.
- Architecture documentation includes a complete data flow diagram for every AI component, delivered to the client.
- Third-party AI providers (OpenAI, Anthropic, Google) are configured with training opt-out where the API supports it.
- After project completion, Code and Trust deletes all client data from development and staging environments, documented and confirmed in writing.
- RAG (retrieval-augmented generation) indexes are built and stored in client-owned infrastructure, not Code and Trust infrastructure. (A sketch of these rules as a deploy-time check follows this list.)
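These commitments read as policy, but they can also be encoded as a per-engagement configuration that a deployment pipeline checks automatically. A hypothetical sketch, with field names of our own that mirror the list above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataHandlingPolicy:
    """Per-engagement data rules, checked at deploy time (illustrative)."""
    engagement_id: str
    cross_client_sharing: bool = False          # never shared across clients
    third_party_training_opt_out: bool = True   # where the provider API supports it
    rag_index_in_client_infra: bool = True      # indexes live in client-owned infra
    delete_after_completion: bool = True        # dev/staging data removed at close

def assert_compliant(policy: DataHandlingPolicy) -> None:
    # Fail the deploy if any hard rule from the engagement terms is violated.
    assert not policy.cross_client_sharing, "cross-client sharing is never allowed"
    assert policy.rag_index_in_client_infra, "RAG index must live in client infra"
    assert policy.delete_after_completion, "post-engagement deletion must be scheduled"

assert_compliant(DataHandlingPolicy(engagement_id="example-engagement"))
```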
Ready to build AI the right way?
Whether you're exploring AI for the first time or cleaning up a system that wasn't built with these principles — start with an AI Audit. We'll tell you exactly where you stand and what we'd do differently.