Proceed with caution: Three questions for Australian governments to answer as they consider gen AI

Generative artificial intelligence (gen AI) could add global economic value of as much as A$6.7 trillion a year. Combined with automation, it could also reshape the future of work, leading to an estimated 1.3 million job transitions in Australia¹ by 2030, while noticeably boosting productivity. Through efficiency gains alone, gen AI could add as much as 1.1 percent to private-sector productivity. We estimate that adopting AI and automation could add $170 billion to $600 billion a year to Australia’s GDP by 2030. The Minister for Industry and Science, Ed Husic, cited this estimate in the Australian Government’s recent interim response to its consultation on the topic.

What is getting less attention is the application of gen AI in the public sector. There are risks, of course, but the potential for federal, state, territory and local governments to serve their people better is substantial. A wide range of agencies could use gen AI tools to automate the initial drafting of documents and generally improve citizen services. Health departments, for example, could use gen AI to improve claims management, policy evaluation, and personalised outreach. Welfare departments are already using it to assist with claims submission. Such innovation can have important second- and third-order effects, such as speeding up benefits delivery, reducing repetitive work for staff and improving the accuracy of payments. And, of course, many powerful use cases remain for “traditional”, non-generative AI.

Because government is complex, decisions around deploying gen AI will be complex too. That is one reason why Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) is collaborating with the US National Science Foundation on how to deploy the technology both effectively and ethically. Public-sector leaders, for their part, can clarify their options by addressing the following three questions:

How can we use gen AI to improve citizen service?

Gen AI isn’t the right fit for every service or department, so it’s important to identify the most promising use cases and then set priorities based on their potential impact and feasibility. At first – and in line with the government’s interim response – it may be prudent to avoid use cases that carry a high potential for risk and/or a limited tolerance for errors. Examples of early adoption could include enabling chatbots to help residents navigate government services, such as getting a driver’s licence or registering a property title; summarising citizen feedback from hotlines and assisting call centre operators; automating internal processes without arriving at decisions; improving the quality of data; or providing personalised crop management advisories.
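To make one of these patterns concrete, the sketch below shows what “summarising citizen feedback” without “arriving at decisions” might look like in code. It is a minimal illustration only: `call_model` is a hypothetical placeholder for whatever approved model endpoint an agency actually uses, and the names and fields are invented for this example. The key design point is that the tool produces a draft tagged for human review and attaches no decision or citizen-level action.

```python
from dataclasses import dataclass

def call_model(prompt: str) -> str:
    """Hypothetical placeholder for an agency-approved gen AI endpoint."""
    raise NotImplementedError("Wire up the department's approved model here.")

@dataclass
class DraftSummary:
    source_count: int                     # number of feedback items summarised
    text: str                             # model-drafted summary text
    status: str = "PENDING_HUMAN_REVIEW"  # drafts are never auto-published

def summarise_feedback(items: list[str]) -> DraftSummary:
    """Draft a themes summary of hotline feedback for a human to review.

    Deliberately decision-free: the output is a draft with no
    recommendation or action attached to any individual citizen.
    """
    prompt = (
        "Summarise the recurring themes in the following citizen feedback. "
        "Do not speculate about individuals or recommend any decisions.\n\n"
        + "\n".join(f"- {item}" for item in items)
    )
    return DraftSummary(source_count=len(items), text=call_model(prompt))
```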

How can we address the potential risks?

Risks that apply to everyone include unpredictability, inaccuracy, bias, and cybersecurity vulnerabilities. In addition, employees may see gen AI as a threat. We believe it is better seen as an opportunity to make their work lives more interesting by automating repeatable tasks and enabling people to focus on the more important, ‘only human’ aspects of their mission.

For governments, there are particular risks, such as the loss of confidential data or compromises of national security. To help address these issues, the Australian Government has introduced a voluntary AI ethics framework for safety and reliability. It also joined the EU and 27 other countries in signing the Bletchley Declaration, committing to international collaboration on AI safety and risk-based frameworks. The NSW and Queensland state governments, for example, have issued similar guidelines. These efforts are all works in progress, which makes sense, given how quickly AI and gen AI are evolving.

The value of communication cannot be overstated. Some of the biggest roadblocks to gen AI adoption – particularly for use cases deemed low-risk – will continue to be mindsets and behaviours that are resistant to change; people may fear job displacement or simply not trust the technology. That is why it is essential to communicate a clear and consistent understanding of gen AI’s possibilities.

How should we scale gen AI?

A useful principle is to start small and safe, experimenting to identify gen AI risk parameters and preparing internal policies, guidelines, and controls to mitigate them. In creating these controls, humans should always be in the loop, with the technology used as an aid as the highest-value, lowest-risk use cases are progressively implemented. At the same time, early discovery work can commence on use cases with longer lead times but larger-scale impact.
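As an illustration of what “humans always in the loop” can mean in practice, here is a minimal sketch, with invented names, of a review gate: no model-drafted text is released without an explicit reviewer decision, and every decision is logged for audit.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("hitl_gate")

def review_gate(draft_text: str, reviewer: str, approved: bool,
                edited_text: str | None = None) -> str | None:
    """Gate a gen AI draft behind an explicit human decision.

    Returns the text cleared for release, or None if the draft is
    rejected. Every outcome is logged with reviewer and UTC timestamp.
    """
    stamp = datetime.now(timezone.utc).isoformat()
    if not approved:
        log.info("Draft REJECTED by %s at %s", reviewer, stamp)
        return None
    released = edited_text if edited_text is not None else draft_text
    log.info("Draft APPROVED by %s at %s (edited=%s)",
             reviewer, stamp, edited_text is not None)
    return released

# Example: a drafted reply is only released after a case officer signs off.
reply = review_gate("Draft reply to a citizen enquiry ...",
                    reviewer="case_officer_42", approved=True)
```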

Gen AI technology alone will not solve fundamental problems, and the level of change management required is significant. For every dollar governments spend on gen AI technology, more will need to be spent on transition, such as incorporating gen AI into routine processes and building staff capability to manage and improve the technology over time.

There is also the matter of talent. The CSIRO estimated that 1.2 percent of all job postings in 2022 were AI-related, compared with 0.5 percent across 2015-19. The public sector has often found it difficult to hire and keep AI specialists. Hiring and upskilling will need to step up, and talented engineers and scientists will need to be given dynamic problems to solve if they are to be retained.

Make no mistake: gen AI has arrived in Australia. That could be a very good thing for government agencies, the people they serve, and for budgets. Addressing these three questions could improve the odds of success.

Aldous Mitchell, a Partner, works internationally with governments at the state and federal levels in assessing and deploying gen AI. Damien Bruce is a Senior Partner in our Melbourne office and leads our Public, Social and Health Sector in Asia.

¹ We define a job transition as a change in occupation, including a change to a new occupation in the same occupation group, due to decreased future demand.