The public sector holds enormous potential for data and Artificial Intelligence (AI) to have a transformative impact. After all, governments have access to tremendous amounts of data, and government operations affect everyone, in small and large ways, every day.
While it is no secret that rich data catalyses Artificial Intelligence, adoption among government entities appears uneven and generally lags behind the private sector. Many agencies struggle to bridge the gap between their existing IT infrastructure and practices and the value that new digital technologies make possible. In some governments, however, there are entire departments, or pockets within departments, where adoption is robust, advanced and successful.
Everyone agrees that the massive amounts of digital data generated by citizens’ activity represent an incredibly valuable resource. Unfortunately, this ever-expanding resource is often underutilised today. Public sector agencies struggle to unlock the value of their data due to outdated legacy systems and limited analytics capabilities, leaving them data-rich but insight-poor. They often grapple with the associated, yet unnecessary, challenges of big data – high costs, poor data quality, and inconsistent data sources and formats – without experiencing any of the enticing benefits.
There are many lessons to draw from the events of COVID-19 but perhaps one of the most critical is the importance of being able to use data to prepare for potential scenarios and inform our decision making. Public sector agencies require a multifaceted approach, including the ability to quickly integrate new data, make accurate, multilevel forecasts, and provide data-driven insights for policymakers.
Against this backdrop, having a robust data and AI strategy in place will help the public sector better harness the power of data.
The prevailing question is: What is the successful path to the adoption and deployment of AI?
The picture of AI adoption in government is mixed, likely owing to an environment that is often risk-averse, subject to myriad legislative hurdles and vast in its reach. Even so, the use of AI has expanded beyond discrete use cases and experiments into wider adoption. There are clear signs pointing to a potential explosion of AI adoption, even though gaps in capabilities and strategy remain apparent.
Pursuing their missions every day, government agencies spend much of their time focused on operational issues. That time-consuming focus is required in government departments and offices that are held accountable for achieving clearly defined missions. If they fall short, the consequences can be devastating – for the citizens they serve, as well as for the government organisation itself.
In that context, it’s easy to see how AI remains a second-tier priority for some government leaders who have operational roles. This presents government leaders with a paradox. Many have no time to fully embrace AI due to everyday demands, yet AI advances could be instrumental in unlocking real, measurable operational improvements that reduce strains on resources and give them more time to fulfil their mission.
In light of this, how can people understand which AI capabilities are most likely to be adopted in government? What are the biggest untapped opportunities for AI adoption in government? What obstacles and challenges unique to the government are most important to understand today to ensure progress tomorrow?
The OpenGovLive! Virtual Breakfast Insight held on 3 December 2021 aimed to impart knowledge on how government agencies can accelerate, innovate and transform their advanced analytics capabilities, make data an integral part of their decision-making and adopt AI to better serve citizens.
Harnessing the game-changing potential of data and AI in government for optimal outcomes
Mohit Sagar, Group Managing Director and Editor-in-Chief, OpenGov Asia, kicked off the session with his opening address.
The world has fundamentally changed and the challenges of these times will require sophisticated solutions to meet the new demands of the world. Without a doubt, technology is a priority and the enabler, Mohit asserts.
Decisions are made every day, but they should not be done blindly. To make informed decisions, people need actionable insights. Today, citizens expect government services to be personalised, intuitive, engaging and anticipatory. To deliver the best citizen experience and stay relevant, having data and universal access to it is the key to transforming organisations.
“Data is like a diamond,” Mohit posits. “Data that is not refined and polished will not produce insights – tools have to be used to achieve that.”
Data fuels AI, Mohit believes. Effectively building and deploying AI and machine learning systems require large data sets. Developing a machine learning algorithm depends on large volumes of data, from which the learning process draws many entities, relationships, and clusters.
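As an illustrative sketch (not drawn from the event itself), the idea that a learning process draws clusters out of volumes of data can be shown with a minimal k-means-style algorithm in plain Python; the data points and cluster count here are hypothetical.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group 2-D points into k clusters."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2)
            clusters[i].append((x, y))
        # Move each centroid to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

# Two hypothetical groups of observations: one near (0, 0), one near (10, 10).
data = [(0.1, 0.2), (0.3, -0.1), (-0.2, 0.0),
        (9.8, 10.1), (10.2, 9.9), (10.0, 10.3)]
centroids, clusters = kmeans(data, k=2)
```

With only a handful of points the separation is trivial; the point of the sketch is that the algorithm learns the cluster structure from the data alone, which is why volume and coverage of data matter so much in practice.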
As Singapore accelerates its Smart Nation efforts, data will only become a more precious commodity. The nation has unveiled two new programmes to drive the adoption of AI in the government and financial services sectors. It also plans to invest another SG$180 million ($133.31 million) in the national research and innovation strategy to tap the technology in key areas, such as healthcare and education.
The fund is on top of SG$500 million ($370.3 million) the government already has set aside in its Research, Innovation and Enterprise (RIE) 2025 Plan for AI-related activities, said the Smart Nation and Digital Government Office (SNDGO) in a statement in November 2021.
These investments have been earmarked to support various research in areas that address challenges of AI adoption, such as privacy-preserving AI, and areas of societal and economic importance including healthcare, finance, and education. The funds also will facilitate research collaborations with the industry to drive the adoption of AI.
For Mohit, AI will transform every industry and create huge economic value. Technologies like supervised learning are automation on steroids. They are very good at automating tasks and will have an impact on every sector – from healthcare to manufacturing, logistics and retail.
Beyond a doubt, AI is becoming more commonplace, says Mohit, citing examples such as the robot dog Spot, the outdoor security robot O-R3 and the multi-purpose all-terrain autonomous robot, Matar. AI is here to stay. While Singapore has been doing well in AI adoption, it is still early days – the government is only beginning to harness the technology.
Mohit urged agencies to recognise the beneficial use cases of AI. He reminded the delegates that the complexity of the challenges besetting the world today requires sophisticated solutions. As such, it would be wise for delegates to partner with experts to better place themselves to respond with agility and efficiency in a rapidly evolving world.
Capitalising on the opportunities for AI adoption in government
Dr Steve Bennett, Director, Global Government Practice, SAS, spoke about the different challenges and success in AI for government applications.
Steve shares that the practice of using data to make better decisions was pioneered in government in WWII, giving rise to operations research, defined as “A scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control.”
Today, using data to make better decisions may be identified as Artificial Intelligence, which supports better decisions by training systems to emulate specific human tasks through learning and automation.
Steve observes that AI is an increasing priority for the government – 75% of government managers want to deploy AI to help them “keep up.” At the same time, global government leadership sees an increasing opportunity; 80% of government data is estimated to be in formats not easily leveraged before AI.
He points out several opportunities where AI can make a real difference in how jobs are done in the public sector. In health, AI has been used to promote public health in India, improve cancer outcomes through better decision making in Amsterdam, keep the U.S. food supply safe and make COVID-19 outbreak predictions that inform targeted policy decisions.
It is also extensively used in public safety and security, such as F-35 predictive maintenance, keeping women safe from gender violence in Spain and reducing judicial case delays. In citizen services, AI has been used to reduce youth recidivism in Oregon and reduce unemployment in Denmark.
Attractive as AI is, there are technical and organisational challenges that public sector employees need to be aware of, Steve observes. He explains that there are two categories of AI challenges.
The first is technical and organisational challenges. AI requires a copious amount of data that is well-organised, clean and in good shape. The data readiness of government agencies needs to be in place before AI models can be trained.
Apart from that, there is also a skill gap in the government. Public sector employees need to understand how the models work so that they know when to trust a model and when to challenge it. Then there are also cultural realities, such as leaders who are not ready to accept the insights that come from AI models.
The second category of challenges comes from legal, ethical and societal challenges. There are geopolitical concerns, issues of ethics and values, as well as legal implications related to AI adoption.
In summary, Steve reiterates that the complex problems of today herald a time of change. To stay relevant and efficient to citizens, government agencies need to understand the benefits and considerations of using technology and harness it accordingly.
Deploying AI in government services
Frederic R Clarke, Principal Data Scientist and Director, Machine Intelligence & Novel Data Sources (MINDS), Australian Bureau of Statistics, spoke next on his agency’s effort to unlock data to support Australia’s management of the COVID-19 pandemic. The work uses integrated, multisource data and machine intelligence to derive new insights into the economic and social impact of the pandemic.
According to Frederic, federal, state and territory governments seek to understand both the transient and enduring impact of the pandemic so that they can better target policies that assist Australia’s recovery in the aftermath. The pandemic is not a singular disruptive event, he says, but a series of connected crises of varying duration that play out on local, national and global scales. It has amplified many existing problems while creating new ones.
For Frederic, a complex problem like a pandemic cannot be understood from a single perspective or a single source of data. The fundamental challenge is that the effects of the pandemic are deeply interconnected, dynamic and multifactorial. To connect the dots across a broad canvas of interrelated economic and social factors, policy analysts need a dynamic multisource-evidence base and new analytical techniques.
Economies and societies form complex systems, Frederic asserts. As a result, public policy is fraught with problems that are notoriously difficult to isolate and objectively specify – complex systems do not yield to familiar linear analytical techniques based on reductionist principles.
Frederic likens the interrelated web of problems to a spider’s web – tugging on one strand through an intervention can ripple outwards and create unintended consequences in many other areas. He suggests that these considerations are not specific to the pandemic but form a general set of policy concerns that cut across traditional portfolios and jurisdictions.
Frederic shares that three paradigms underpin his organisation’s analytical approach:
- Data analysis is citizen-centric: The focus is on a system-wide framing in an analytical context
- Analysis is iterative: Defining the problem is part of the problem. There is a need to align with the objectives and changing needs of the policymakers at every stage and set directions for their analysis based on previous results
- Analyses draw on integrated data: Since no single source of data can provide all the observations needed in a complex policy space, being able to combine data sources is critical.
Frederic uses the example of the Australian government studying the impact of the pandemic on jobs and employment. To do so, they modelled the labour market as a system of connected entities – businesses, persons, households, jobs, locations, etc. – that interact through different types of relationships. Then, the concepts, entities, relationships and associated metadata are represented and stored in a knowledge graph. They use automated reasoning and machine learning approaches to integrate data and find new insights.
Believing firmly in AI, Frederic encouraged its use in government services, where high-quality insights can drastically improve decision making.
After the informative presentations, delegates participated in interactive discussions facilitated by polling questions. This activity is designed to provide live-audience interaction, promote engagement, hear real-life experiences, and facilitate discussions that impart professional learning and development for participants.
The first poll asked what percentage of overall IT investment delegates foresee being committed to data and AI deployment over the next two years. Just over half (54%) of the delegates felt 10% – 30% of their IT investment would go into data and AI deployment. About 42% predicted that between 30% – 50% would be allocated while 4% said more than 50% would be deployed.
When asked about their biggest challenge in terms of data analytics, most delegates indicated the lack of skilled staff who understand big data analysis (61%). The rest of the delegates cited the inability to derive meaningful insights through data analytics (17%), the lack of quality data and proper data storage (17%) or the inability to synchronise disparate data sources (5%).
Delegates shared the sentiment that data seemed to be understood only by a few. Getting everyone to produce insights is a “management challenge,” one delegate opines. There is a gap between the data scientist and departments, as well as the lack of knowledge to ask the right questions. There were also other challenges such as the lack of domain knowledge among the data scientists and having to manage a huge amount of data and legacy systems.
In response to these challenges, Frederic shared that his organisation’s strategy is to build data science teams that include domain experts; they do not expect a single data scientist to have the full array of technical skills and domain knowledge. On the volume of data, he suggests viewing computing platforms as part of the capability: analysing data takes more than mathematical and statistical expertise – people need the tools to process large volumes of data.
In ranking the biggest challenge they face when implementing their AI strategy, almost half (46%) went with lack of properly skilled teams. Other delegates found the inflexible business processes and teams (21%), the lack of availability of data (21%), ineffective project management/governance (8%) and ineffective third-party partners (4%) as their biggest challenges.
Participants expressed a range of responses such as the culture of pushback when it comes to AI adoption, having the right skill set to achieve certain objectives, data classification frameworks, compliance requirements and high cost.
As far as cost goes, Steve offered his experience of extending algorithmic techniques to take small amounts of data and artificially build and sample training data sets out of small data sets.
Frederic echoed Steve’s point and asserted that sampling is a powerful strategy. However, the issue lies in being able to sample without introducing biases, such that the model can return results that reflect the presence or absence of characteristics. He also posited the idea of agile development for producing analytical and statistical results. He opines that the challenges of implementing AI are never singular – it involves the capabilities of multidimensional teams and issues of cloud deployment.
Frederic expanded on the considerations surrounding sampling. “It depends on your purpose,” he says. For instance, in the case of generating classification sets through coding and mapping responses to the code, there is no need to include all the data in basic questions. In those cases, the model integrity and model accuracy depends on choosing the right set of training cases. However, if the purpose is for analytics and exploratory model building, one needs to be very careful in the application of sampling, since one may not know what is to be tested or what could be found.
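One generic way to build training sets out of a small data set, in the spirit of what Steve describes, is bootstrap resampling: drawing repeated samples with replacement. This is an illustrative sketch with hypothetical data, not SAS’s specific technique, and it also shows the kind of bias check Frederic’s caution implies.

```python
import random

def bootstrap_samples(data, n_samples, sample_size=None, seed=42):
    """Draw resampled training sets (with replacement) from a small data set.
    A generic bootstrap sketch, not any vendor-specific technique."""
    rng = random.Random(seed)
    size = sample_size or len(data)
    return [[rng.choice(data) for _ in range(size)] for _ in range(n_samples)]

# A hypothetical small set of labelled observations.
small_data = [(1.2, "fraud"), (0.4, "ok"), (0.9, "ok"), (2.1, "fraud")]
training_sets = bootstrap_samples(small_data, n_samples=100)

# The bias concern: check that class proportions in the resamples stay
# close to the original 50/50 split rather than drifting.
fraud_rate = sum(
    label == "fraud" for s in training_sets for _, label in s
) / (100 * len(small_data))
```

Checking that the resampled class proportions stay close to the originals is a crude guard against the sampling bias Frederic warns about; for exploratory model building, more careful stratification would be needed.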
On the most common use case of AI in their organisation, delegates were almost equally divided between developing smarter products or services (28%), driving intelligent business processes (24%), automating repetitive tasks (24%) and developing a more personalised relationship with stakeholders (24%).
On whether AI adoption remains a second-tier priority in the face of pressing requirements to deliver critical services, more than half of the delegates (56%) indicated that the lack of required skill sets is hindering the desired adoption. Other delegates indicated that AI has not been fully embraced due to everyday demands (28%) or that there is not enough budget to deploy the required AI solutions (11%). The remainder (5%) said AI adoption takes a back seat for some government leaders who have operational roles.
Besides the issue of privacy and security, some delegates felt that there is a lack of value proposition that businesses can come up with. Another delegate opined that organisations should not only look at people who develop AI but the managerial capability in understanding the potential and limitations of AI.
Mohit echoes that point of view and asserts the need to raise skill sets internally, while noting that deeper insights require bringing in outside experts.
On the most important ingredient for successful and wider AI adoption in the public sector, more than half of the delegates (55%) indicated that starting small and building the business case by demonstrating initial wins is the most important. That is followed by the belief in aligning all departments on the single vision and garnering support (20%), establishing clear lines of authority and ownership across the entire organisation (15%) and other considerations (10%).
The final poll asked delegates for their thoughts on the essential tenet for ethical AI to work. Most of the delegates believe in the need for an effective and practical ethical framework/governance model for AI (56%), followed by the belief that AI solutions should allow for auditability and traceability (22%) and training AI models with carefully-assessed and representative data (11%).
In closing, Steve expressed his gratitude towards everyone for their participation and highly energetic discussion. Delegates believe that AI can make a difference in tailoring benefits for citizens and generating incredible insights. However, challenges such as the lack of data and ethical considerations remain important hurdles to cross.
He highlighted Frederic’s point about the application of agile approaches to insights delivery and reiterated that some of the best practices for AI adoption are starting small and having transparency and auditability in the data.
Steve emphasised the edge AI can offer organisations in their journey towards delivering better government services. He reiterated that the digital transformation is an ongoing and collaborative journey and encouraged the delegates to connect with him and the team to explore ways in which AI can help agencies improve their operations.