I used to paste my trading strategy into every "free AI assistant" I could find. ChatGPT, Claude, Gemini, random startups with slick landing pages — if it was free, I used it.
Then I read the terms of service. Actually read them. Not the summary, the actual legal text.
What I found made me delete half the tools I was using.
The "Free" Trap
Here's the thing about free AI tools: They're not charities. They have to make money somehow. And if you're not paying with money, you're paying with something else.
Usually, that's your data.
I found three categories of sketchy terms:
1. The "We Train On Your Data" Tools
Some free tiers explicitly say they use your inputs to train their models. Everything you paste — your code, your strategies, your private notes — becomes part of their training data.
That "free" AI assistant? It's learning from your proprietary trading algorithm. Your edge. Your secret sauce.
And once it's in the training data, you can't get it out. It's not "your" data anymore. It's part of a model that competitors can access.
2. The "We Share With Partners" Tools
Other tools have vague language about "sharing anonymized data with partners." "Anonymized" is close to meaningless: I've seen "anonymized" datasets that were trivial to de-anonymize.
Your "anonymized" trading data + timestamp + trade size + market conditions = easily identifiable as you.
3. The "We Reserve The Right" Tools
The worst ones have broad clauses like "we reserve the right to use data for business purposes" or "we may share data with affiliates."
Translation: We can do whatever we want. You agreed to it. Good luck.
What I Actually Caught Them Doing
Here's where it gets personal. I ran an experiment.
I created a unique "canary" — a completely fake trading strategy with specific, unusual parameters. Something that would never exist in the wild. Then I pasted it into three different "free" AI tools over the course of a week.
Two months later, I searched for those specific parameters. Found them mentioned in a "trading strategy database" that was being sold. The database included other users' strategies too.
Correlation isn't causation. Maybe it was a coincidence. But I'm pretty sure at least one of those tools leaked (or sold) user data.
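A canary like this is easy to set up yourself. Here's a minimal sketch — the strategy text and parameter values are invented for illustration; the only thing that matters is that the token is unique enough to be searchable later:

```shell
# Generate a unique, searchable canary token and embed it in a fake strategy
CANARY_ID=$(openssl rand -hex 8)
cat <<EOF > canary_strategy.txt
Strategy: mean-reversion variant qx-$CANARY_ID
Entry: RSI(13) < 21.7 AND volume z-score > 3.41
Exit: trailing stop at 0.83 * ATR(9)
EOF
echo "Search for: qx-$CANARY_ID"
```

Paste the file's contents into the tool you want to test, then periodically search the web (and any strategy databases you can access) for the exact token. If `qx-<your id>` ever turns up outside your machine, you know exactly which door it walked out of — one distinct canary per tool.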
What I'm Using Now (And Why)
I'm not anti-AI. I'm anti-giving-away-my-edge-for-free.
Here's my current stack:
For Sensitive Stuff: Local Models Only
I run Ollama locally for anything involving my actual strategies, code with proprietary logic, or personal data. It's not as smart as GPT-4, but it doesn't phone home.
# On my local machine
ollama run llama3.1
# Ask it anything; zero data leaves my computer

For coding help, I use Continue.dev with local models. It's like Copilot, but the code never leaves my machine.
For General Research: Paid Tiers Only
When I need the big models (GPT-4, Claude), I pay for them. Paid tiers usually have better privacy terms — "we don't train on your data" is standard for pro accounts.
It's $20/month. My trading edge is worth way more than that.
For Public Knowledge: Free Tiers Are Fine
I still use free tiers for stuff that doesn't matter. "Explain this Python concept." "Summarize this public research paper." General knowledge questions where I have nothing to lose.
The line: If it's specific to my work, my strategies, or my private data, it goes through paid or local tools only.
How To Check Your Tools
Here's what I learned to look for:
Red flags:
- "We may use inputs to improve our services" (means training)
- "Anonymized data may be shared" (means your data with a few fields stripped — often re-identifiable)
- "Affiliates and partners" (means anyone they want)
- No mention of data retention limits
- Vague "business purposes" clauses
Green flags:
- "We do not train on your data"
- "Data is deleted after [specific time period]"
- Enterprise/paid tiers with explicit privacy guarantees
- Local/self-hosted options
The Tools I Actually Trust
Based on my paranoid reading of ToS:
- Claude Pro: Explicitly says they don't train on Pro user data
- OpenAI API (not ChatGPT): Paid API has better terms than free ChatGPT
- Local models (Ollama, LM Studio): Zero data leaves your machine
- Perplexity Pro: Claims no training on Pro queries
Tools I stopped using:
- Free ChatGPT for anything sensitive
- Random "free AI" startups with vague ToS
- Any tool that doesn't clearly say "we don't train on your data"
The Bottom Line
I'm not saying all free AI tools are evil. I'm saying the economics of "free" AI are suspicious. Training models is expensive. If you're not paying, ask yourself: what are they selling?
Usually, it's you. Or your data. Or both.
For most people, this doesn't matter. If you're using AI to write birthday cards or plan vacations, who cares?
But if you're a trader with alpha, a developer with proprietary code, or anyone with valuable intellectual property, be careful. That "free" convenience might be the most expensive thing you use.
Found a tool with sketchy ToS? Or know one with genuinely good privacy practices? Tell me @ZayJII. I maintain a private list of tools I actually trust.
Disclaimer: All content is for educational use only.