TL;DR: I'm sharing some of the ways we at Clear have democratised AI adoption in the company: giving people access to frontier models, reducing developer friction, and actively fostering a learning culture around AI.
Here are some things we have tried at Clear to enable everyone in the team to adopt AI tools.
Launch an internal chat tool
We launched an internal chat tool, similar to ChatGPT or Claude, so that everyone in the team has unmetered access to the best AI models. For example, we just added support for Gemini 2.5 Pro.
The tool sits on top of the APIs provided by OpenAI, Anthropic, Google, and others. There are plenty of open source chat front-ends that take your API keys and handle the rest; we use Open WebUI.
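To make the pattern concrete, here is a minimal sketch (not our actual setup; the model name and environment variable are just examples) of what such a chat front-end does under the hood: it keeps a running message history and forwards it to a provider's chat-completions API using a company-owned key. Open WebUI layers the UI, authentication, model switching and saved history on top of essentially this kind of call.

```python
# Minimal sketch of the pattern a chat front-end follows: keep a message
# history and forward it to a provider's chat-completions API with your
# own key. (Illustrative only; Open WebUI does this, plus much more.)
import os

from openai import OpenAI  # pip install openai

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # company key, not a personal account

history = [{"role": "system", "content": "You are a helpful internal assistant."}]

def chat(user_message: str, model: str = "gpt-4o") -> str:
    """Send one user turn and return the assistant's reply, keeping context."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model=model, messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(chat("Give me three tips for writing better prompts."))
```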
Here's why you should do this:
- The free tiers of ChatGPT and Claude give very limited access to frontier models; usually they are limited to the 'mini' models. If your team has never used the most capable models, they don't know what LLMs are actually capable of.
- Paying for the pro or team tier of every provider is expensive. For example, ChatGPT Team is $25 per member. APIs are usually cheaper, especially since most users won't chat enough to cost you $25 in API credits.
- APIs also give you the flexibility to work with multiple providers in parallel: it makes no sense to pay $25 each for ChatGPT, Claude, and Gemini individually.
- Free tiers are also risky from a security and privacy standpoint. On free plans, most providers reserve the right to train on your chats and use them to improve their models. In a corporate setting, having employees use personal accounts to do work is very risky from a compliance perspective.
Make it easy to build AI projects internally
We built an internal proxy that our developers can use to access providers like Anthropic and OpenAI, both directly and via AWS Bedrock and Microsoft Azure.
This drastically reduces friction when building an AI-powered use case: developers can quickly do POCs and experiments without asking someone to provision a specific API key on their behalf.
We learned this the hard way: during one hackathon that we ran, I personally had to spend a lot of time just making sure that every team had the required access!
There are many open source solutions for this. We use something custom, but right now it looks like LiteLLM is a great option.
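As a rough sketch of what such a gateway buys you, here is LiteLLM's Python SDK rather than our internal proxy (LiteLLM also ships a standalone proxy server that exposes the same routing behind a single endpoint): one call signature, OpenAI-style responses, and the provider selected by a model-name prefix. The model IDs below are illustrative.

```python
# Sketch of provider routing with LiteLLM: one function, OpenAI-style
# responses, provider chosen by the model-name prefix. Keys are picked up
# from the usual environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY,
# AWS credentials for Bedrock, and so on). Model IDs are examples.
from litellm import completion  # pip install litellm

messages = [{"role": "user", "content": "In one sentence, what is a frontier model?"}]

# Direct to OpenAI
openai_reply = completion(model="gpt-4o-mini", messages=messages)

# Direct to Anthropic
anthropic_reply = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

# The same model family, routed through AWS Bedrock instead
bedrock_reply = completion(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0", messages=messages
)

for reply in (openai_reply, anthropic_reply, bedrock_reply):
    print(reply.choices[0].message.content)
```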
Enabling our team to quickly build AI-powered applications, without asking for permission, has proven to be a huge win. Some of the best ideas come bottom-up: for example, one team used this capability to set up automatic code reviews on GitHub, powered by aider.
Pedagogy and Evangelism
There is also a need for deliberate culture shaping: making sure you bring everyone in the team along with you on the journey.
Examples of this include:
- Writing internal notes on AI: this blog actually started out as internal writing. The field moves very fast, and keeping up with AI progress is almost a full-time job.
- There is a lot of value in having some team members synthesise their learnings and share them with the broader team.
- There is also a lot of value in simplifying jargon and giving clear explanations of what different AI concepts mean.
- Showcasing use cases and doing demos: people learn by association. If you come across a novel way to use AI, share it broadly with your team! The best ideas will come from your team, so find a way to spread them more widely.
The recent Shopify memo is a great example of this in action.
This is just the start. We are still learning how best to adopt AI in our daily lives. I'd be very curious to learn how everyone else is approaching this problem.
Please reach out to me via email (ankit at clear dot in) or on Twitter.