Ambitious AI, grounded in governance: how GOPA balances innovation with standards such as the EU AI Act and GDPR.

Responsible AI in international cooperation: GOPA's group-wide approach

A government partner asks: can AI shorten the time it takes to answer a citizen's request from weeks to hours? An energy ministry wants to know if machine learning can keep the grid stable as renewables scale. An agricultural extension service needs to reach twice as many farmers without doubling its staff.

These are not hypothetical questions. They arrive regularly from partners across Africa, Asia, the Balkans and Latin America. Each one carries an implicit second question: can we do this responsibly?

GOPA wants the answer to both to be yes – but only under specific conditions. AI must be embedded in institutions, not just demonstrated in pilots. Solutions must work within real constraints: connectivity, language diversity, limited technical teams, legacy systems. And experimentation must not create hidden technical, legal or ethical debt.

This is why GOPA established the AI Studio in 2023: a group-wide hub to identify where AI adds genuine value and to make sure new applications are developed under demanding governance standards. The framework described in this article is being rolled out across GOPA projects and will continue to evolve as regulation and practice develop. 

Standards as compass

GOPA's AI work references four layers of regulation:

  1. EU AI Act – risk classification, transparency and conformity requirements
  2. GDPR – data protection by design and by default
  3. OECD AI Principles – accountability, transparency, security, human oversight
  4. Partner-country law – local data protection, sectoral regulation, institutional mandates

These frameworks provide a common language for EU donors and international organisations. They also future-proof solutions against tightening regulation.

However, many partners outside the EU are not bound by EU law and do not want to navigate it. GOPA addresses this through a two-track approach:

  • Internal compliance: GOPA is designing its policies, processes and tools so they are compatible with EU AI Act and GDPR requirements.
  • External adaptation: GOPA translates these principles into locally relevant safeguards, so partners benefit from robust governance without unnecessary bureaucracy.

The result: GOPA does the regulatory work once; partners focus on solving problems. 

How the AI Studio operates

The AI Studio serves as shared infrastructure across GOPA's business units. It provides:

  • data engineering and machine learning expertise,
  • security and governance guidelines, and
  • standardised templates for risk assessment and documentation.

Business units contribute sector knowledge: how grid operators manage load balancing, which steps matter in court procedures, how social protection systems determine eligibility, how agricultural advisers reach remote communities, how communication strategies reach and engage diverse audiences.

When an AI application emerges, the Studio and sector team work through three questions: 

  1. Is AI the right tool? Could simpler analytics or process redesign achieve the same outcome?
  2. What changes for people and institutions? Who uses the tool? Who remains accountable for decisions?
  3. What does “responsible” mean here? Which standards apply – EU, local or both?

This approach keeps innovation anchored in operational reality. 

Four design commitments

These commitments are intended to define how GOPA designs and deploys AI across sectors and geographies.

  1. Human responsibility by design

AI supports decisions; it does not carry responsibility. GOPA applies three rules:

  • AI components flag, suggest or prioritise – they do not decide autonomously in sensitive contexts.
  • Roles are explicit: which steps are automated, which remain human, where escalation is possible.
  • Significant AI-assisted actions are traceable through logs and documentation.

Example: procurement integrity

GOPA has supported public procurement reform in Southeast Asia, Africa and the Balkans for over two decades. In an AI-supported procurement system, algorithms can flag unusual patterns across thousands of tenders – pricing anomalies, timing clusters, supplier concentration. Under GOPA's approach, these are alerts for auditors, not automated sanctions. Human experts review context, apply national law and decide follow-up.

For EU donors, this aligns with the AI Act's high-risk requirements. For local authorities, it offers a practical safeguard: AI makes audit work more targeted without replacing legal judgement. 
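The flag-don't-decide pattern described above can be sketched in a few lines. This is a minimal illustration, not GOPA's actual system: the z-score rule, field names and threshold are all assumptions chosen for clarity. The key design point is that the function emits alerts carrying evidence for a human auditor, never a sanction or automated decision.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Tender:
    tender_id: str
    supplier: str
    price: float

def flag_price_anomalies(tenders: list[Tender], z_threshold: float = 2.5) -> list[dict]:
    """Return advisory alerts for tenders whose price deviates strongly from peers.

    Alerts are inputs for human auditors: each carries the evidence behind it,
    and no sanction or decision is taken automatically.
    """
    prices = [t.price for t in tenders]
    if len(prices) < 3:
        return []  # too little data for a meaningful baseline
    mu, sigma = mean(prices), stdev(prices)
    alerts = []
    for t in tenders:
        z = (t.price - mu) / sigma if sigma else 0.0
        if abs(z) >= z_threshold:
            alerts.append({
                "tender_id": t.tender_id,
                "supplier": t.supplier,
                "reason": f"price z-score {z:.1f} vs. peer tenders",
                "action": "review by human auditor",  # flag, not sanction
            })
    return alerts
```

In a real audit setting the detection logic would be far richer (timing clusters, supplier concentration, national thresholds), but the separation of concerns stays the same: the model narrows the search; the auditor applies the law.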

  2. Respect for data and privacy

Data is often the most sensitive element of any AI project. GOPA uses GDPR-level safeguards as a quality benchmark when designing AI solutions and adapts them to local frameworks:

  • collect only data that is actually needed (data minimisation),
  • state clearly for which purposes data is used (purpose limitation),
  • keep personal and sensitive data secure and access-controlled, and
  • delete or anonymise data no longer required (storage limitation).

GOPA's role is to translate privacy principles into concrete design choices, so partners do not need to become experts in EU data law.

Example: agricultural advisory

A digital advisory tool for farmers might need information about location, crops and farm size – but not names or precise addresses. GOPA's approach: process data on devices or local servers where possible, pseudonymise any data that must be transmitted, and apply clear retention rules so detailed histories are not kept longer than necessary.
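The minimisation, pseudonymisation and retention steps above can be sketched as follows. All field names, the salt handling and the retention window are illustrative assumptions, not GOPA's actual data model; the point is that identifying fields are dropped by construction, not filtered out afterwards.

```python
import hashlib
from datetime import date, timedelta

RETENTION_DAYS = 365  # hypothetical retention rule

def pseudonymise_record(record: dict, salt: str) -> dict:
    """Keep only the fields the advisory service needs; replace identity with a keyed hash."""
    farmer_key = hashlib.sha256((salt + record["farmer_id"]).encode()).hexdigest()[:16]
    return {
        "farmer_key": farmer_key,        # stable pseudonym, not reversible without the salt
        "district": record["district"],  # coarse location only, no address or GPS
        "crop": record["crop"],
        "farm_size_ha": record["farm_size_ha"],
        "recorded_on": record["recorded_on"],
        # name, phone number and precise coordinates are deliberately never copied
    }

def apply_retention(records: list[dict], today: date) -> list[dict]:
    """Storage limitation: drop detailed histories older than the retention window."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["recorded_on"] >= cutoff]
```

Because the output schema is an allow-list, adding a new sensitive field upstream cannot silently leak it downstream.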

  3. Fairness and local context

AI models can behave differently across languages, regions or demographic groups. GOPA's commitment is not to promise perfect neutrality, but to make bias visible, manageable and reducible.

In practice:

  • test tools on different user groups and languages where data allows,
  • check not only technical performance but also tone, completeness and consistency, and
  • document known limitations in terms non-technical decision-makers can understand.

Example: multilingual citizen services

When testing a multilingual citizen service chatbot, GOPA might find that responses in a widely spoken language are more detailed than those in a local language. Under GOPA's approach, this triggers corrective action: adjusting training data, using specialised language models, or building in compensating mechanisms such as easy escalation to human agents. This is as much about fair access to information and communication as it is about technical performance. 
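A disparity check like the one described can start very simply. The sketch below (an illustration, not GOPA's test harness) uses average answer length as a crude completeness proxy and flags languages that fall far behind the best-served one; tone and accuracy still need human review, which is exactly the point of making the gap visible.

```python
def language_parity_report(samples: dict[str, list[str]], tolerance: float = 0.7) -> dict:
    """Compare average answer length per language as a rough completeness proxy.

    Languages whose answers are much shorter than the best-served language are
    flagged so a human team can investigate completeness and tone in detail.
    """
    avg_len = {lang: sum(len(a) for a in answers) / len(answers)
               for lang, answers in samples.items() if answers}
    best = max(avg_len.values())
    return {
        "average_length": avg_len,
        "flagged": sorted(lang for lang, v in avg_len.items() if v < tolerance * best),
    }
```

A flagged language does not prove bias on its own, but it converts a vague worry ("the local-language answers feel thin") into a measurable trigger for corrective action.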

  4. Lean and sustainable technology

AI is not automatically the best or most sustainable option. Large models and complex architectures can be expensive, difficult to maintain and energy-intensive.

GOPA commits to:

  • start with the simplest solution that reliably solves the problem,
  • choose models and infrastructure that local partners can realistically operate, and
  • consider energy and cost implications when comparing technical options.

Sometimes the right answer is not “more AI” but a clearer process, a smaller model or a conventional information system. This is especially important where budgets and technical capacity are constrained. 
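The "simplest solution first" rule can be made operational as a selection policy: order candidate approaches from simplest to most complex, and only adopt a more complex one if it clearly beats what is already good enough. A minimal sketch, with illustrative names and an assumed scalar evaluation score:

```python
from typing import Callable

def prefer_simplest(candidates: list[tuple[str, Callable]],
                    eval_fn: Callable[[Callable], float],
                    min_gain: float = 0.02) -> tuple[str, float]:
    """Pick the earliest (simplest) candidate unless a later one clearly beats it.

    candidates: (name, solution) pairs ordered from simplest to most complex.
    eval_fn:    returns a quality score in [0, 1] for a solution.
    A more complex option is adopted only if it improves the score by min_gain,
    which keeps operating cost and energy use in the trade-off by default.
    """
    best_name, best_score = candidates[0][0], eval_fn(candidates[0][1])
    for name, solution in candidates[1:]:
        score = eval_fn(solution)
        if score >= best_score + min_gain:
            best_name, best_score = name, score
    return best_name, best_score
```

The margin `min_gain` encodes the judgement that a marginal accuracy gain rarely justifies a model a local partner cannot maintain.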

Sector expertise and current focus areas

GOPA's structure makes this approach possible. The AI Studio works with teams that have operated on the ground for years and bring deep knowledge of institutions, sectors and audiences in partner countries.

On this basis, GOPA teams are exploring and designing AI applications across five areas: 

  • Energy and infrastructure: Renewable production forecasting, grid optimisation, predictive maintenance
  • Governance and public finance: Procurement data analysis, legislative drafting support, citizen information services
  • Agriculture and environment: Satellite data integration for climate resilience, advisory system enhancement
  • Education and social development: Language tools, skills matching, data-driven monitoring and evaluation
  • Communications and stakeholder engagement: Multilingual content adaptation, summarising complex reports for different audiences, synthesis of feedback from consultations to inform programme communication 

Each application will test and refine the governance approach, generating lessons for different country contexts as concepts move into implementation. When donors and partners ask how GOPA manages AI risks, the answer is not a generic framework but a conversation grounded in specific institutions, processes and legal contexts. 

Conclusion: ambition with safeguards

GOPA does not treat AI governance as a constraint on innovation. It is the condition for sustained innovation in complex environments.

Clients – government ministries, EU institutions, international organisations, development banks, private operators – should not have to choose between ambitious AI applications and accountability obligations. By building on standards such as the EU AI Act and GDPR, and adapting them to local realities, GOPA aims to offer both: AI that delivers results within demanding safeguards.

The AI Studio is starting to make this concrete through readiness assessments, governance frameworks and sector-specific pilots. The approach will continue to evolve as regulation develops and practice generates new lessons – but the direction is clear.

The goal is to use AI where it genuinely helps institutions and communities, in ways that GOPA and its partners can stand behind.