Artificial intelligence can be a powerful tool in ESG reporting, but only when it’s used with a solid understanding of sustainability frameworks. For companies still getting up to speed, relying too heavily on AI can lead to serious missteps in your disclosures.
AI is now integrated into many business processes, and for good reason. In my own consulting work, I use it often to explore industry trends, compare metrics, and streamline repetitive tasks. But ESG reporting is different. It's governed by specific standards, and doing it well requires more than good writing; it requires judgment, context, and human oversight.
As we approach reporting season, many companies are drafting their annual ESG disclosures. Some may be tempted to turn to AI tools like ChatGPT to speed up the process. But asking an AI to generate your material ESG risks and opportunities, or worse, to write the report in full, could leave you with a document that doesn't stand up to scrutiny from investors, regulators, or your board.
Here are three of the most common pitfalls I see when companies use AI to support ESG reporting, and what to do instead.
1. Value Chain Reporting
AI can’t understand your value chain the way you do.
Some of the leading ESG frameworks—like those aligned with Australia's climate-related financial disclosures—expect companies to consider the full value chain when identifying material impacts, risks, and opportunities.
This includes not only your direct operations, but your upstream suppliers and downstream stakeholders. Frameworks like IFRS S2 and the GHG Protocol require that you map out dependencies, risks, and impacts across these interconnected relationships.
But this is no small task. Understanding your value chain typically involves cross-functional collaboration, multiple workshops, and input from subject matter experts.
→ Where AI goes wrong:
AI can surface general trends in your industry, but it won’t be able to reflect the specific risks or dependencies in your business model. It doesn’t know your supply chain, your customer base, or your regional context.
→ A better approach:
Manually map your value chain first. Then, use AI to explore possible material risks or opportunities across different components. Use it as a research assistant, not a decision-maker.
2. Describing Risks, Opportunities and Impacts
AI often confuses the basics.
Sustainability frameworks distinguish between risks, impacts, and opportunities, and whether each is positive or negative. These distinctions matter, especially when you're aligning with frameworks like IFRS S1/S2 or the GRI Standards.
AI often blends these concepts. For example, it might treat a “positive impact” as a business opportunity, or confuse financial risks with environmental impacts. It also tends to misunderstand the term “material,” which is defined differently depending on the framework.
→ Where AI goes wrong:
Without clear instructions, AI-generated content can misrepresent how a topic affects your business or the environment. These errors are easy to spot for sustainability professionals and can undermine the credibility of your report.
→ A better approach:
If you're using AI to draft or explore risks and impacts, you'll likely need to prompt it with examples that align to the specific frameworks you're using. But in most cases, it's better to work with a consultant trained in these frameworks to get it right from the start.
3. Determining What’s Material
Materiality is not something you should outsource to AI.
Determining what’s material is a process that involves stakeholder input, leadership review, and often board-level oversight. You can’t simply ask ChatGPT, “What are our material ESG topics?” and expect a list that will hold up to investor expectations.
Arriving at materiality involves judgment, context, and discussion—and that’s where human involvement is non-negotiable.
→ Where AI goes wrong:
AI doesn’t know your company’s strategic priorities, risk tolerance, stakeholder concerns, or evolving business environment. Any list of risks or opportunities it produces is generic and potentially misleading.
→ A better approach:
Use AI to suggest metrics or benchmarks that might support your analysis. But the process of assessing significance, gathering input, and aligning on final material topics should always be led by people within your organisation.
“AI is a powerful assistant, but when it comes to ESG reporting, human insight still drives credibility. Investors and regulators can tell when you've done the work, and when you haven't.”