As generative artificial intelligence rapidly reshapes industries worldwide, its potential to transform public administration is both profound and far-reaching. Beyond automating services, AI offers governments the opportunity to evolve into adaptive, proactive, and citizen-centered institutions. This article explores how AI—particularly chatbots and intelligent agents—may redefine the public sector by 2030. Drawing on global case studies and strategic foresight, it examines the scenarios, challenges, and transformative promise of cognitive government.

The advent of generative artificial intelligence (AI) represents a pivotal transformation in the public sector, marking a shift that transcends the simple digitalization of services and processes. Unlike earlier waves of technological innovation that primarily focused on digitizing paperwork or automating specific tasks, generative AI introduces a deeper, structural change. It signals the emergence of cognitive capabilities within public administration—an evolution from static digital infrastructures to dynamic, intelligent systems capable of learning, reasoning, and proactively responding to societal needs.
According to the report Artificial Intelligence and the Public Sector: Scenarios for 2030, published in October 2025 by Anteverti in collaboration with Esade, the integration of intelligent conversational agents—such as chatbots powered by large language models—will play a central role in this transition toward what the authors term “cognitive government.” This concept envisions an administrative apparatus not merely supported by technology, but fundamentally reengineered around it. A cognitive government, in this sense, would leverage AI to anticipate citizens’ needs, offer personalized responses, and continuously adapt its operations based on real-time feedback and data analysis.
The report frames this transformation within a broader shift from digital to cognitive infrastructures. While digital government focused on the dematerialization of services—transferring physical processes to online platforms—cognitive government introduces a more intelligent layer that can analyze data, interpret complex contexts, and make informed decisions. This reconfiguration demands more than technological procurement; it requires a systemic redesign of institutions, underpinned by what the report refers to as the mirror hypothesis: the idea that organizational capacity must mirror technological capability for innovation to be truly effective. In other words, advanced AI tools can only deliver their full potential if public institutions evolve in parallel, both culturally and structurally.
The report outlines three implementation scenarios for chatbot technologies in the public sector: conservative, disruptive, and systemic. In the conservative scenario, chatbots are used as informational assistants, providing basic responses and easing access to public services. The disruptive scenario advances further, introducing transactional capabilities—such as completing procedures, booking appointments, or providing personalized information based on user data. The systemic scenario, the most ambitious, proposes the development of an interoperable ecosystem in which public institutions operate in tandem with private and community actors. In this vision, the government acts as a platform, facilitating seamless and contextual interactions across a network of services. Each scenario entails a different level of technological integration, organizational complexity, and governance challenge, raising crucial questions about risk, equity, and accountability.
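To make the distinction between the three scenarios more concrete, the sketch below models them as escalating capability tiers of a citizen-facing assistant. This is a minimal illustration rather than anything specified in the report; the class and function names (PublicAssistant, handle) are hypothetical, and the behavior of each tier is a simplified assumption.

```python
from enum import Enum, auto


class Scenario(Enum):
    """Capability tiers loosely mirroring the report's three scenarios."""
    CONSERVATIVE = auto()  # informational: answers questions, points to services
    DISRUPTIVE = auto()    # transactional: completes procedures on the user's behalf
    SYSTEMIC = auto()      # interoperable: orchestrates public, private, and community services


class PublicAssistant:
    """Hypothetical assistant whose behavior widens with each scenario."""

    def __init__(self, scenario: Scenario):
        self.scenario = scenario

    def handle(self, request: str) -> str:
        # Conservative: only informational answers.
        if self.scenario is Scenario.CONSERVATIVE:
            return f"General information about: {request}"
        # Disruptive: also executes transactions such as bookings or filings.
        if self.scenario is Scenario.DISRUPTIVE:
            return f"Transaction completed for: {request}"
        # Systemic: routes the request across an ecosystem of providers.
        return f"Routed '{request}' across the service ecosystem and returned a composite answer"


if __name__ == "__main__":
    for tier in Scenario:
        print(tier.name, "->", PublicAssistant(tier).handle("renew driving licence"))
```

The point of the sketch is the widening scope of responsibility at each tier, which is also where the governance questions about risk, equity, and accountability become progressively harder.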
Real-world examples from across the globe illustrate the diverse trajectories governments are taking in AI implementation. In Estonia, the Bürokratt initiative is a landmark case: a national digital assistant that enables citizens to interact with multiple public agencies through a single conversational interface. By using a centralized AI system with access to integrated databases, Bürokratt facilitates a form of proactive government—where the state can notify citizens of relevant services or legal obligations without requiring them to initiate the interaction. Similarly, Singapore and Dubai are pioneering multiservice models that integrate chatbots with urban management platforms, transportation, healthcare, and education systems, thereby offering a unified citizen experience underpinned by real-time data and algorithmic intelligence.
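As an illustration of the proactive pattern that Bürokratt points toward, the following sketch shows one way a centralized assistant might scan integrated registries and notify citizens of upcoming obligations before they ask. It is an assumption-laden simplification, not Bürokratt's actual implementation; the registry structure, identifiers, and function names are invented for the example.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Obligation:
    """Hypothetical record drawn from an integrated public registry."""
    citizen_id: str
    description: str
    due: date


# Stand-in for the integrated databases a centralized assistant could query.
REGISTRY = [
    Obligation("EE-1001", "Passport renewal", date.today() + timedelta(days=20)),
    Obligation("EE-1002", "Vehicle inspection", date.today() + timedelta(days=90)),
]


def proactive_notifications(horizon_days: int = 30) -> list[str]:
    """Return messages for obligations falling due within the horizon,
    without waiting for the citizen to initiate the interaction."""
    cutoff = date.today() + timedelta(days=horizon_days)
    return [
        f"Notify {o.citizen_id}: '{o.description}' is due on {o.due.isoformat()}"
        for o in REGISTRY
        if o.due <= cutoff
    ]


if __name__ == "__main__":
    for message in proactive_notifications():
        print(message)
```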
Latin America presents a different but equally relevant perspective. In Buenos Aires, the Boti chatbot on WhatsApp provides transactional services to citizens, managing over 300,000 monthly interactions. Its integration into widely used platforms reflects a strategic approach to accessibility and user engagement, particularly in regions where formal digital literacy may be uneven. Spain, while still in earlier stages of adoption, is exploring a variety of decentralized experiments. Madrid (with Línea Madrid and Clara) and the region of Catalonia (with a shared generative chatbot service) demonstrate the potential for localized innovation. Smaller municipalities such as Las Rozas and Ciudad Real are testing integrations with messaging applications and multilingual support to better serve diverse communities. Despite these initiatives, the report emphasizes that most use cases remain at low maturity levels, signaling a substantial opportunity for growth and standardization.
Beyond the operational enhancements—such as reducing bureaucratic overhead, optimizing workflows, and improving cost-efficiency—AI in the public sector holds transformative promise for democratic values and institutional legitimacy. Generative AI can improve the inclusivity of government by adapting services to different languages, literacy levels, and cultural contexts. It can expand access by offering 24/7 availability and reduce complexity by translating legal or administrative jargon into plain language. At the same time, the deployment of AI raises critical concerns. Without proper regulation and oversight, there is a real risk of reinforcing existing inequalities, undermining privacy rights, or eroding public trust in automated decisions. The opacity of AI algorithms, particularly those based on machine learning, necessitates robust mechanisms for transparency, accountability, and human oversight.
To responsibly transition toward cognitive government, the report proposes a structured roadmap consisting of four phases: informational chatbots, contextual guidance, transactional support, and autonomous agents with decision-making capabilities. Each phase presents technical, organizational, and ethical challenges. Ensuring interoperability between systems, implementing sound data governance frameworks, and developing multidisciplinary teams with expertise in law, technology, and public policy are essential conditions for success. Institutional capacity-building, rather than reliance on external vendors, is highlighted as a strategic imperative. Governments must move beyond treating AI as a product to be acquired and instead cultivate it as an institutional capability to be developed and sustained internally.
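One way to reason about the four-phase roadmap is as an ordered maturity scale against which an existing deployment can be assessed. The sketch below illustrates that reading; the capability flags are assumptions made for the example, not criteria taken from the report.

```python
from dataclasses import dataclass

# Ordered phases of the roadmap, from informational chatbots to autonomous agents.
PHASES = ["informational", "contextual guidance", "transactional", "autonomous agent"]


@dataclass
class Deployment:
    """Hypothetical capability flags for a public-sector chatbot deployment."""
    answers_questions: bool = False
    uses_citizen_context: bool = False
    completes_transactions: bool = False
    acts_autonomously: bool = False

    def phase(self) -> str:
        """Return the highest roadmap phase the deployment currently satisfies."""
        if self.acts_autonomously:
            return PHASES[3]
        if self.completes_transactions:
            return PHASES[2]
        if self.uses_citizen_context:
            return PHASES[1]
        return PHASES[0]


if __name__ == "__main__":
    # Example: a chatbot that answers questions and uses context, but cannot yet transact.
    print(Deployment(answers_questions=True, uses_citizen_context=True).phase())
```

Framing maturity this way also makes the report's point about internal capability concrete: each step up the scale demands more interoperability, stronger data governance, and broader multidisciplinary expertise than the last.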
The vision for 2030 presented in the report is that of a fully integrated, multimodal government interface—one that replaces fragmented service portals and siloed agencies with fluid, conversational interactions. In such a model, the citizen’s journey through life events (e.g., birth registration, education, employment, healthcare, retirement) is supported by anticipatory and adaptive digital companions. The operationalization of this vision will require governments to adopt open standards, embrace modular system architectures, and foster long-term collaboration with academia, civil society, and the private sector. The examples of Singapore, Taiwan, and Shanghai illustrate that such a transformation is feasible, but only if approached through a coherent and strategic public product mindset.
Ultimately, the report delivers a critical warning: public administrations that fail to align their internal structures with the evolving capacities of generative AI risk falling behind—not just in terms of efficiency, but also in democratic legitimacy and innovation potential. The key issue is not whether to adopt AI, but how far institutions are willing to reimagine themselves in order to harness its full potential. It is in this organizational mirror, where institutional capacity reflects technological capability, that the future of the public sector will be decided in the era of intelligent agents.
