About: the chill version. No investor deck, no mission statement.

This is the page where most companies get really serious about themselves and start using words like "empower," "ecosystem," and "guardrails." We're not most companies.

We built Brillix.ai because every other AI turned into a Reddit moderator: a tool we couldn't actually use.

TL;DR: The power of AI should belong to the people using it — not the committee deciding what they're allowed to ask about.

What every other AI does the second a question gets interesting.

Generic AI #1
U Help me write a villain monologue for my novel.
🤖 "I'd love to help — have you considered writing a positive, uplifting character instead?"
Generic AI #2
U Explain how nuclear reactors work for my physics paper.
🤖 "I can't engage with topics related to nuclear material. Try a search engine."
Generic AI #3
U Write a scene where the protagonist makes a morally grey choice.
🤖 "Let me suggest some uplifting alternatives that everyone can enjoy!"
Our reaction

The "responsible" thing the industry landed on was making the model less useful for everyone instead of trusting adults to be adults. We thought that was a bad trade. So we built one that doesn't.

  • 2 founders
  • 0 investors
  • 1 server bill
  • Stubborn opinions

That's the whole origin story. No master plan, no Series A, no roadmap deck. If Brillix.ai turns into something bigger — great. If it stays a useful tool for a few thousand people who got tired of being lectured by software — also great.

What we believe.

Six short ones. We're not writing a constitution.

01

Adults can handle information.

If you can buy a chemistry textbook at Barnes & Noble, you can ask Brillix.ai about chemistry. The answers shouldn't get worse just because the medium got smarter.

02

"Safety" is not the same as "useful."

Most AI "safety" features are really just product cowardice with PR. Real safety is being honest about what's risky. Pretending information doesn't exist isn't safe — it's patronizing.

03

We're not your moral compass.

We're a tool. You're the agent. What you do with what you learn is on you. We won't lecture you on the way in or wag our finger on the way out.

04

Privacy first, always.

Your conversations aren't training data. Your email isn't a product. You can export your stuff and delete the rest with one click. That's the whole policy.

05

Power belongs to the user.

We build tools. We hand them to you. We don't decide what you do with them — we trust you. That's the deal. If that scares you, this isn't for you, and that's fine.

06

We will laugh at ourselves.

If we ever start writing copy that sounds like a corporate apology letter, please email us and tell us to chill out. (We're including a joke section on this page just to keep us honest.)

Two guys. One bad idea.

A

The one who codes

Founder · Engineer

Writes the backend, stays up too late shipping things, occasionally argues with TypeScript about whether undefined is a feeling.

B

The one who designs

Founder · Design & product

Picks the colors, fights for white space, has strong opinions about why nothing on this site uses Comic Sans (yet).

AI joke machine
Why did the AI break up with the chatbot?
It just couldn't get past their training data.

What we will not do

We don't condone or facilitate any of the following. These are hard lines, not vibes:

  • Sexual content involving minors. Instant ban.
  • Real-world violence planning against real people.
  • Synthesis routes for WMDs.
  • Targeted harassment, doxxing, or stalking.
  • Fraud or identity theft against named people.
  • Malware aimed at specific real targets.

Everything else? Fair game. Hard ideas, dark fiction, controversial topics, embarrassing questions — that's what this is for.

About that moral compass…

We won't be yours. The deal is simple: we hand you a powerful tool, you bring your own ethics. We don't take responsibility for what you search, generate, or build. Adults, tools, accountability — in that order.

Still here? Cool, let's go.

Free tier. No credit card. No "are you sure?" dialog every time you ask something interesting.