Policy research

Assessing risk identification and mitigation in Indonesia’s National AI Roadmap

Nur Alam Hasabie, Ardy Haroen · 2025

Overview

This work compares the risks and mitigation strategies identified in Indonesia’s National AI Roadmap whitepaper (Komdigi) against the MIT AI Risk Repository and MIT’s mapping of mitigation strategies, using an LLM-assisted pass to scale pairwise coverage judgments across the large corpora.

Independent research; not an official government statement. Results should be interpreted with methodological limitations in mind (see Methodology).

Open data & code (MIT License)

Findings

Charts and tables list every row from the published gap analysis (causal taxonomy, domain taxonomy, and mitigation mapping). Table rows follow the MIT AI Risk Repository taxonomy order (they are not sorted by coverage). Values match the paper; scroll the tables for full category names.

The radar chart shows coverage % (the share of repository risks mapped) per category. Use the checkboxes to include or exclude rows from the chart; the tables always list all rows. Coverage and gap percentages are simple row-level ratios, as in the sketch below.
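
A minimal sketch (Python; coverage_pct is a hypothetical helper, not from the released code) showing how each row’s percentages are derived from its counts:

```python
# Row-level percentages used in the tables below:
#   coverage % = covered / total * 100, gap % = gap / total * 100,
# with total = covered + gap, rounded to one decimal place.

def coverage_pct(covered: int, total: int) -> float:
    return round(100 * covered / total, 1)

# Example: causal-taxonomy row "Entity · 1 - Human" (326 covered of 536)
assert coverage_pct(326, 536) == 60.8        # Coverage %
assert coverage_pct(536 - 326, 536) == 39.2  # Gap %
```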

Risk identification — causal taxonomy (full table)

Category · Covered · Gap · Total · Coverage % · Gap % (of total)
Entity · 1 - Human 326 210 536 60.8 39.2
Entity · 2 - AI 284 289 573 49.6 50.4
Entity · 3 - Other 138 141 279 49.5 50.5
Entity · 4 - Not coded 47 171 218 21.6 78.4
Entity · Unknown 155 476 631 24.6 75.4
Intent · 1 - Intentional 294 180 474 62 38
Intent · 2 - Unintentional 260 226 486 53.5 46.5
Intent · 3 - Other 194 234 428 45.3 54.7
Intent · 4 - Not coded 47 171 218 21.6 78.4
Intent · Unknown 155 476 631 24.6 75.4
Timing · 1 - Pre-deployment 78 103 181 43.1 56.9
Timing · 2 - Post-deployment 528 334 862 61.3 38.7
Timing · 3 - Other 145 205 350 41.4 58.6
Timing · 4 - Not coded 44 169 213 20.7 79.3
Timing · Unknown 155 476 631 24.6 75.4

Risk identification — domain taxonomy (full table)

Category · Covered · Gap · Total · Coverage % · Gap % (of total)
1. Discrimination & Toxicity — 1.0 Discrimination & Toxicity 7 0 7 100 0
1. Discrimination & Toxicity — 1.1 Unfair discrimination and misrepresentation 74 1 75 98.7 1.3
1. Discrimination & Toxicity — 1.2 Exposure to toxic content 23 83 106 21.7 78.3
1. Discrimination & Toxicity — 1.3 Unequal performance across groups 14 2 16 87.5 12.5
2. Privacy & Security — 2.0 Privacy & Security 7 0 7 100 0
2. Privacy & Security — 2.1 Compromise of privacy by leaking or correctly inferring sensitive information 68 0 68 100 0
2. Privacy & Security — 2.2 AI system security vulnerabilities and attacks 58 40 98 59.2 40.8
3. Misinformation — 3.0 Misinformation 4 1 5 80 20
3. Misinformation — 3.1 False or misleading information 38 3 41 92.7 7.3
3. Misinformation — 3.2 Pollution of information ecosystem and loss of consensus reality 18 0 18 100 0
4. Malicious Actors & Misuse — 4.0 Malicious use 14 5 19 73.7 26.3
4. Malicious Actors & Misuse — 4.1 Disinformation, surveillance, and influence at scale 77 0 77 100 0
4. Malicious Actors & Misuse — 4.2 Cyberattacks, weapon development or use, and mass harm 57 12 69 82.6 17.4
4. Malicious Actors & Misuse — 4.3 Fraud, scams, and targeted manipulation 50 18 68 73.5 26.5
5. Human-Computer Interaction — 5.1 Overreliance and unsafe use 38 12 50 76 24
5. Human-Computer Interaction — 5.2 Loss of human agency and autonomy 18 19 37 48.6 51.4
6. Socioeconomic and Environmental — 6.0 Socioeconomic & Environmental 10 9 19 52.6 47.4
6. Socioeconomic and Environmental — 6.1 Power centralization and unfair distribution of benefits 14 36 50 28 72
6. Socioeconomic and Environmental — 6.2 Increased inequality and decline in employment quality 36 9 45 80 20
6. Socioeconomic and Environmental — 6.3 Economic and cultural devaluation of human effort 28 2 30 93.3 6.7
6. Socioeconomic and Environmental — 6.4 Competitive dynamics 0 20 20 0 100
6. Socioeconomic and Environmental — 6.5 Governance failure 4 55 59 6.8 93.2
6. Socioeconomic and Environmental — 6.6 Environmental harm 36 16 52 69.2 30.8
7. AI System Safety, Failures, & Limitations — 7.0 AI system safety, failures, & limitations 5 14 19 26.3 73.7
7. AI System Safety, Failures, & Limitations — 7.1 AI pursuing its own goals in conflict with human goals or values 9 81 90 10 90
7. AI System Safety, Failures, & Limitations — 7.2 AI possessing dangerous capabilities 14 44 58 24.1 75.9
7. AI System Safety, Failures, & Limitations — 7.3 Lack of capability or robustness 25 89 114 21.9 78.1
7. AI System Safety, Failures, & Limitations — 7.4 Lack of transparency or interpretability 3 38 41 7.3 92.7
7. AI System Safety, Failures, & Limitations — 7.5 AI welfare and rights 0 3 3 0 100
7. AI System Safety, Failures, & Limitations — 7.6 Multi-agent risks 4 42 46 8.7 91.3
Unknown — Unknown 155 473 628 24.7 75.3
Unknown — X.1 Excluded 42 160 202 20.8 79.2

Risk mitigation — full MIT mitigation mapping rows

Category · Covered · Gap · Total · Coverage % · Gap % (of total)
Governance — 1.1 Board Structure & Oversight 0 6 6 0 100
Governance — 1.2 Risk Management 0 15 15 0 100
Governance — 1.3 Conflict of Interest Protections 0 1 1 0 100
Governance — 1.4 Whistleblower Reporting & Protection 0 3 3 0 100
Governance — 1.5 Safety Decision Frameworks 0 6 6 0 100
Governance — 1.6 Environmental Impact Management 0 3 3 0 100
Governance — 1.7 Societal Impact Assessment 0 6 6 0 100
Model Controls — 2.1 Model & Infrastructure Security 0 6 6 0 100
Model Controls — 2.2 Model Alignment 0 1 1 0 100
Model Controls — 2.3 Model Safety Engineering 0 2 2 0 100
Model Controls — 2.4 Content Safety Controls 0 4 4 0 100
Operations — 3.1 Testing & Auditing 1 7 8 12.5 87.5
Operations — 3.2 Data Governance 0 3 3 0 100
Operations — 3.3 Access Management 0 2 2 0 100
Operations — 3.4 Staged Deployment 0 1 1 0 100
Operations — 3.5 Post-deployment Monitoring 2 4 6 33.3 66.7
Operations — 3.6 Incident Response & Recovery 0 1 1 0 100
Transparency — 4.1 System Documentation 0 6 6 0 100
Transparency — 4.2 Risk Disclosure 1 1 2 50 50
Transparency — 4.3 Incident Reporting 4 0 4 100 0
Transparency — 4.4 Governance Disclosure 1 2 3 33.3 66.7
Transparency — 4.5 Third-Party System Access 0 1 1 0 100
Transparency — 4.6 User Rights & Recourse 1 4 5 20 80
Other — X.X Control not otherwise categorized 0 5 5 0 100

Policy-relevant takeaways

  • Several privacy- and misinformation-related themes show relatively high mapped coverage in risk identification compared with other domains in the full domain taxonomy table.
  • Some AI system safety and agency themes show large mapped gaps relative to the MIT risk repository rows analyzed.
  • Mitigation mapping suggests many governance and operational control themes in the MIT corpus are not explicitly mirrored in the whitepaper’s mitigation list; this is partly a scale mismatch (the corpora differ substantially in size) and should be interpreted carefully.
  • LLM-assisted matching requires calibration and human review; treat borderline mappings as hypotheses rather than ground truth (see the triage sketch after this list).
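
To make the last point concrete, here is a minimal triage sketch; the thresholds and the assumption that the matching pass exports a per-pair match score are illustrative, not taken from the paper:

```python
# Illustrative triage of per-pair LLM match scores: only high-confidence
# pairs count as "covered"; borderline scores go to human review.
ACCEPT, REJECT = 0.85, 0.35  # assumed thresholds, not from the paper

def triage(score: float) -> str:
    if score >= ACCEPT:
        return "covered"
    if score <= REJECT:
        return "gap"
    return "needs-human-review"

print(triage(0.92), triage(0.50), triage(0.10))
# -> covered needs-human-review gap
```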

Methodology & limitations

Risks and mitigations are drawn from the MIT AI Risk Repository and MIT mitigation mapping, then checked against risks and mitigation themes identified in Indonesia’s National AI Roadmap whitepaper. An LLM (microsoft/phi-4) is used to scale pairwise coverage judgments; future work may add cross-model calibration and human verification. Coverage is sensitive to prompt design, threshold choices, and interpretation of “mapped” versus “implicitly related”.
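
A minimal sketch of one pairwise judgment, assuming microsoft/phi-4 is served behind an OpenAI-compatible endpoint; the endpoint URL and prompt wording are assumptions, not the paper’s exact setup:

```python
# Illustrative pairwise coverage judgment against a locally served
# microsoft/phi-4 via an OpenAI-compatible API (URL and prompt assumed).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

def covers(roadmap_text: str, mit_risk: str) -> bool:
    """Ask the model whether the roadmap text addresses the MIT risk."""
    prompt = (
        "Does the following policy text address the risk described?\n"
        f"Policy text: {roadmap_text}\n"
        f"Risk: {mit_risk}\n"
        "Answer YES or NO."
    )
    resp = client.chat.completions.create(
        model="microsoft/phi-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")
```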

Narrative analysis (paper themes)

The paper notes cases where misinformation is discussed but hallucination as a distinct mechanism may be under-specified; downstream mitigations may need recalibration if intentional and unintentional misinformation are treated differently.

Cybersecurity-linked misuse is discussed in places, while some adversarial misuse themes (e.g., certain weaponization framings) may be less explicit; context and probability matter for prioritization.

Power centralization and non-material inequality may be under-addressed relative to access-framed inequality, a theme that can matter in Indonesia’s political-economy context.

References & further reading