
WHY DON’T WE TALK ABOUT MONEY?
January 28, 2026

Artificial intelligence is being promoted as the next frontier for investing. But can you trust AI? Not every AI app deserves your confidence; even the credible ones have limitations. Before you trust a machine with your money, it helps to know how these systems actually work and where their limits lie.
The Rise of AI in Investing
Since ChatGPT, Claude, Copilot and other large language models became mainstream, AI-powered investing has become a hot topic. Financial start-ups and established firms now claim to use machine learning, data science, or natural language processing to uncover market patterns invisible to humans.
The idea is appealing. AI can read thousands of reports in seconds, track sentiment across the internet, and crunch decades of financial data faster than any analyst. But that speed can create the illusion of certainty.
How AI Actually Works
AI tools don’t “think” or “predict” in a human sense. They recognise statistical patterns in data and use those patterns to estimate probabilities. A model trained on years of historical information can learn what has usually followed certain market conditions. But when the environment changes, as it constantly does, those patterns can break down quickly.
Large language models like ChatGPT or Claude don’t directly analyse live market or financial data unless they are integrated with external data sources and quantitative tools. They work with language, summarising information and identifying sentiment. They can interpret, explain, or generate text about investing, but they don’t create forecasts or guarantee outcomes. AI can help investors process information more efficiently. Still, it cannot predict the future, eliminate uncertainty or make risk disappear.
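The gap between recognising past patterns and predicting the future can be illustrated with a toy sketch. The prices below are entirely hypothetical, and the "model" is deliberately simple: a rule that learned "yesterday's direction repeats" from a trending period, then faces a market that has switched to mean-reverting behaviour.

```python
# Toy "model" that learned one historical pattern:
# assume yesterday's price move repeats today.

def trend_follower(prices):
    """Predict the next move's direction (+1 up, -1 down) from the last move."""
    return 1 if prices[-1] > prices[-2] else -1

def hit_rate(prices):
    """Fraction of one-step direction calls the rule gets right."""
    hits = 0
    for i in range(2, len(prices)):
        predicted = trend_follower(prices[:i])
        actual = 1 if prices[i] > prices[i - 1] else -1
        hits += predicted == actual
    return hits / (len(prices) - 2)

# Trending regime (like the training data): rises follow rises.
trending = [100, 101, 102, 103, 104, 105]
# Mean-reverting regime: every rise is followed by a fall.
reverting = [100, 101, 100, 101, 100, 101]

print(hit_rate(trending))   # → 1.0  (the learned pattern holds)
print(hit_rate(reverting))  # → 0.0  (same rule, new environment)
```

The rule isn't "wrong" about the data it was trained on; it is wrong about an environment it never saw. Real models are far more sophisticated, but they fail for the same structural reason.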
Not All AI Is Created Equal
When a platform says it uses AI, that can mean many different things. Some tools automate data collection or summarise reports. Others use algorithms trained on historical market data to make statistical predictions or identify patterns. A few may even execute trades automatically with little human oversight.
However, not all AI is built the same way. Some are simple rule-based systems that follow programmed instructions to trigger actions. Others use machine learning to analyse data and adjust based on patterns in past behaviour. Then there are large language models, which process words rather than numbers, helping interpret market commentary or investor sentiment. Each operates differently and carries different risks depending on the decision-making power it is given.
Think of AI investing as a spectrum:
- Assistive AI helps gather and interpret data faster.
- Advisory AI suggests possible investments or portfolio moves.
- Autonomous AI makes and acts on decisions without direct human control.
Each level involves more complexity and more risk if the system goes wrong.
Different Data, Different Risks
Not every organisation trains or runs its AI on the same kind of information. Some start-ups rely almost entirely on public and online sources such as news feeds, company filings and social sentiment (e.g. consumer apps, robo-advisors, crypto platforms). Larger financial institutions (e.g. banks, asset managers, super funds) may integrate proprietary datasets, licensed feeds (like Bloomberg/Refinitiv), and internal risk models into their AI systems. Others use a mix, plugging commercial data sets into pre-built AI models.
Breadth of data doesn’t always equal depth. Systems trained mainly on widely available information may lack uniqueness or insight, since the same data is open to every other AI tool. Public data also tends to be historical, better at explaining the past than anticipating the future.
That said, the sheer volume of public information can still be valuable. Well-designed AI systems can mine vast data sets to spot patterns or highlight interesting opportunities that humans might overlook.
The same also applies to proprietary or internal data. Having exclusive information doesn’t automatically make a model smarter or more accurate. The value depends on the quality of the data, how it’s used, and what kind of opportunity it is trained to look for.
AI For Other Types of Investments
AI tools are now appearing across all types of investment products, not just shares. Some claim to analyse property markets and identify suburbs “about to boom,” while others automate crypto trading or rebalance superannuation portfolios using predictive analytics. Even managed funds and ETFs are starting to promote algorithmic selection or “AI-enhanced” strategies.
When considering these, the same questions apply regardless of the asset class:
- Who designed it?
- What data does it use?
- How is performance verified?
- What happens when market conditions shift?
Common Misconceptions About AI and Investing
A few myths are worth clearing up:
AI can predict the market – It just can’t. AI recognises patterns in past data, not future events. Markets move because of human decisions, shifting economies, and unforeseen events that no algorithm can anticipate. At best, AI can identify trends or correlations that used to matter, but those relationships often break down when conditions change, and things are changing faster than ever.
AI can remove all human error – It can reduce emotional decisions, but it doesn’t eliminate bias or poor data. AI models still depend on how they are built and what information they are fed. If the underlying data is incomplete, outdated or biased, the results will reflect those same flaws. AI can remove emotion, but not error.
AI learns and improves endlessly – No, it doesn’t. Models degrade as markets evolve and must be retrained to stay accurate. Over time, the statistical patterns that drive performance change, so an AI model that worked well last year may fail in a new environment unless it’s continually updated.
AI is neutral – It is not. Algorithms reflect the goals, data and assumptions of the people who create them. In finance, that means an AI’s “view” of success is defined by human judgement, not objective truth: it may be designed to chase short-term returns, minimise volatility or follow a specific strategy. The outcomes mirror the priorities built into the system.
Understanding these limits helps investors approach AI claims with the right level of scepticism.
How to Judge What You’re Being Sold
Before trusting any app or platform that uses AI to handle your money, pause and look beneath the marketing. Ask these questions:
- What kind of AI is this? Is it analysing text, numbers, or both? A language model can summarise market news, but cannot predict a downturn.
- Where does its data come from? Markets depend on timely and accurate data. If the AI uses poor-quality or outdated information, its insights will be less reliable.
- Who built and trained it? Was it created by finance professionals with domain expertise, or general technologists? The difference matters, as models can be designed to serve the provider’s commercial goals rather than the investor’s interests.
- How has it been tested? Credible investing products and companies can show extensive testing, internal and independent reviews, or performance audits. Claims of “consistently beating the market” should be seen as red flags.
- Is there human oversight? The best systems use humans to interpret and validate AI outputs. Unchecked automation can magnify small mistakes into large losses.
- What happens when the AI is wrong? Algorithms can misread events or chase patterns that no longer exist. Always check who carries the financial risk if the system fails.
The Hidden Risks
Every technological advantage comes with trade-offs. AI models are only as good as the data and assumptions they rely on. If that data is incomplete, biased or outdated, the model’s output will be flawed.
Many algorithms are also “black boxes.” Even their creators can’t always explain why a model made a particular decision. This lack of transparency makes it difficult for investors to evaluate reliability.
Then there’s the regulatory gap. In Australia, most AI investing tools are not licensed financial advisers. They sit outside normal consumer protection rules. If you lose money following an AI-generated recommendation, you may have little recourse.
And when AI systems fail, they fail fast. Automation can magnify errors in seconds, a phenomenon professionals call “risk amplification.” In investing, speed cuts both ways.
Bias, Drift and Marketing Language
AI models can inherit the biases of their training data. If they were trained during a period of growth, they might overemphasise optimism. If they rely on U.S. market data, they might misjudge Australian dynamics. Over time, even well-designed systems suffer from “data drift,” where live conditions diverge from the model’s assumptions.
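One simple way teams watch for data drift is to compare the statistics of live data against those of the training data. The sketch below uses hypothetical daily returns and an arbitrary alert threshold; real monitoring systems use more robust statistical tests, but the principle is the same.

```python
import statistics

def drift_score(train_values, live_values):
    """How many training standard deviations the live mean has shifted."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sigma

# Hypothetical daily returns (%): a calm training period vs a volatile live period.
train_returns = [0.1, 0.2, -0.1, 0.0, 0.15, -0.05, 0.1, 0.05]
live_returns = [1.5, -2.0, 1.8, -1.2, 2.1]

score = drift_score(train_returns, live_returns)
if score > 3:  # arbitrary threshold for illustration
    print("Data drift detected: retrain or review the model.")
```

When live conditions shift this far from the training distribution, a model’s learned patterns can no longer be trusted, which is why credible providers retrain and revalidate their systems continually.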
Be alert to how products are described, as well. Terms like “AI-powered,” “machine-learning enhanced,” or “smart investing” are typically marketing phrases, and their use is largely unregulated, so providers rarely need to back up the claims. The labels sound technical but reveal little about the underlying process.
Separating the Useful from the Risky
A few practical indicators can help you identify the more credible platforms:
- Licensing and accountability: Check whether the provider holds an Australian Financial Services Licence or partners with one.
- Transparency: Look for clear explanations of how the AI works in their product and how results are validated.
- Education first: The more credible providers help users understand markets rather than promise quick wins.
- Independent verification: Third-party audits or performance reporting provide stronger trust indicators.
- Plain language: The more jargon you see, the less confident you should be.
How to Use AI Responsibly
AI can be great for research, scenario testing and information gathering. But it should never replace independent thinking, due diligence or licensed advice. Use AI to ask better questions, not to make decisions for you.
Compare any AI-generated insights with reliable sources such as ASX announcements, company reports, or government data. Treat it as one input among many, and always apply your own filter.
The Human Element Still Matters
Investing is part numbers, part behaviour. AI can process the numbers, but only humans understand goals, time horizons and tolerance for loss.
Technology will continue to evolve, and AI will keep influencing how financial decisions are made. But trust will still depend on transparency, accountability and good judgement.
Final Thoughts
AI has already changed how we access and interpret information, and it may soon reshape how investment portfolios are built. However, no algorithm should replace the basic fundamentals:
- Know what you’re investing in,
- Manage your risk, and
- Never place blind trust in an automated black box.
Technology can assist with research. It cannot guarantee outcomes. The safest investor is still the one who understands where human judgement begins and where algorithms should end.