OpenAI’s ChatGPT Can Now Access Bank Accounts and Transactions

I was scrolling through my messages when a system prompt blinked: connect your bank accounts to ChatGPT. I felt that micro-freeze, curiosity pitched against a quiet alarm, and I remember thinking: this will change what you ask an AI, and what it can answer about you.

You already ask ChatGPT about money — what the new account link changes

OpenAI rolled out a preview personal-finance feature that lets selected U.S. ChatGPT Pro users connect accounts across more than 12,000 financial institutions. The chat surfaces a dashboard of balances, recent transactions, investments, and liabilities, and you can ask the model to analyze spending, suggest priorities, or plan for a purchase. Intuit is involved, too: users can schedule sessions with local tax experts inside the chat.

How does ChatGPT connect to my bank account?

The connection runs through third-party account-aggregation plumbing that links your institutions to ChatGPT’s interface. OpenAI says full account numbers won’t be visible and the bot cannot make transfers or alter accounts. The company frames the flow as a one-way read of balances and transactions that the assistant uses to answer questions tied to your personal financial context.
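OpenAI hasn’t published the technical details, but a “one-way read” typically means the assistant holds a token scoped only to data access, never payments. Here is a minimal sketch of what such a fetch might look like, assuming a generic aggregator REST API; the endpoint, token, and field names below are hypothetical, not OpenAI’s actual integration.

```python
import requests

# Hypothetical aggregator endpoint and read-only token; every name here is
# illustrative, since the real integration details are not public.
AGGREGATOR_URL = "https://api.example-aggregator.com/v1/transactions"
READ_ONLY_TOKEN = "token-scoped-to-transactions:read"

def fetch_recent_transactions(account_id: str) -> list[dict]:
    """Read recent transactions with a token that carries no write scope.

    The defining property of a one-way read: this token can authorize
    GET-style reads of balances and transactions, never transfers.
    """
    resp = requests.get(
        AGGREGATOR_URL,
        params={"account_id": account_id, "days": 30},
        headers={"Authorization": f"Bearer {READ_ONLY_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    txns = resp.json()["transactions"]
    # Mirror the stated privacy rule: drop anything resembling a full
    # account number before the data ever reaches the chat context.
    for txn in txns:
        txn.pop("account_number", None)
    return txns
```

Under that model, the security of the whole arrangement rests on the scope of the token, which is why read-only scopes come up again in the practical advice below.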

More than 200 million queries a month — why OpenAI is pushing this

People already turn to ChatGPT for budgeting and investment advice more than 200 million times a month. OpenAI wants to fold raw account data into those conversations so recommendations come with context — what you actually spend, what debts you carry, and whether you have a mortgage. Your transaction history becomes a map of your private life.

Is it safe to share my bank information with ChatGPT?

OpenAI insists the link is secure and that the data can’t be used to move money. But safety is multi-layered. The feature includes a model-improvement setting that, despite being framed as an opt-in, is enabled by default for many users; you can turn it off in settings. Beyond company promises, there are at least three failure modes to weigh: software bugs, third-party integrations, and deliberate misuse by attackers.

Privacy promises meet real-world evidence — the legal and reputational backdrop

Public trust is fragile. OpenAI has reversed public claims before and is racing to build recurring revenue as IPO rumors swirl. CEO Sam Altman is also entangled in a highly public legal fight with Elon Musk, and testimony surfaced during that case has invited questions about honesty at the top. Those headlines matter when you consider handing sensitive financial data to a company under intense scrutiny.

Will OpenAI use my financial data to train its models?

Yes, if you leave the model-improvement option on. OpenAI says training data is handled under its policies, but specifics about retention, access, and downstream model behavior remain thin. Professor Gang Wang at the University of Illinois warned CNN that if documents become part of training data, attackers could induce leaks with specially crafted prompts. That’s not theory: transaction timestamps and amounts can make phishing far more believable.
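The risk scales with specificity: exact amounts and dates are what turn a leaked record into convincing bait. As one defensive sketch (my own illustration, not anything OpenAI ships), a cautious user or intermediary tool could redact those specifics before financial text enters any model that might train on it:

```python
import re

# Illustrative redaction, not part of any published OpenAI pipeline:
# strip dollar amounts and ISO dates before text reaches a model whose
# model-improvement toggle may still be on.
AMOUNT = re.compile(r"\$\d[\d,]*(?:\.\d{2})?")
ISO_DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def scrub(text: str) -> str:
    """Replace the details an attacker would quote back in a phishing email."""
    return ISO_DATE.sub("[DATE]", AMOUNT.sub("[AMOUNT]", text))

print(scrub("Charged $42.00 at CoffeeCo on 2026-02-03"))
# -> Charged [AMOUNT] at CoffeeCo on [DATE]
```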

Attack scenarios and the calculus of risk

Banks are breached, apps expose tokens, and humans make mistakes—those are facts you can check. A malicious actor who can coax transaction details from a model could craft emails referencing exact purchases and dates to trick victims. Imagine a phishing note that cites a $42 (€39) charge you made last week—now it reads as proof, not guesswork.

There’s also the human layer: if you ask ChatGPT to remember goals, debts, or a home-buying plan, that memory can steer future responses and build a persistent profile that’s tempting to exploit. Linking your accounts hands ChatGPT the keys to a house it can walk through at will.

How to think about the trade-offs — practical steps you can take

I advise a few simple moves. Limit connections to accounts you truly need to consult inside the chat. Toggle off the model-improvement setting if you want to keep data out of training pools. Use institutional controls: favor accounts with read-only token scopes, enable strong multi-factor authentication, and audit connected apps regularly.
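To make that audit step concrete, here is a small, hypothetical review script; it assumes you can list connected apps and their granted scopes from your bank’s settings page, and the field names are mine, not any real bank’s export format.

```python
# Hypothetical connected-app audit: flag any integration holding more than
# read-only access. Scope strings and structure are illustrative.
connected_apps = [
    {"name": "ChatGPT", "scopes": ["balances:read", "transactions:read"]},
    {"name": "OldBudgetApp", "scopes": ["transactions:read", "payments:write"]},
]

def needs_review(app: dict) -> bool:
    # Anything beyond a ':read' scope can move money or change the account.
    return any(not scope.endswith(":read") for scope in app["scopes"])

for app in connected_apps:
    status = "REVIEW" if needs_review(app) else "ok"
    print(f"{status:6} {app['name']}: {', '.join(app['scopes'])}")
```

The habit matters more than the script: anything with write access deserves a periodic look, whether or not ChatGPT is in the list.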

If you work with tax pros through the Intuit integration, treat that scheduling convenience like any third-party service: confirm credentials, check reviews, and avoid uploading full tax returns until you’re satisfied with the security posture.

This feature will be tempting because it makes planning faster and more personal — but personal does not mean risk-free. Will you trade convenience for a new class of exposure? What boundary would you set between speed and privacy?