KURGAN System and Turkey's AI-Supported Public Auditing
- Duygu Şener

- Oct 29
- 7 min read

Launched in 2025 by the Republic of Turkey's Tax Inspection Board (VDK), KURGAN (Kuruluş Gözetimli Analiz Sistemi, Institution-Supervised Analysis System) performs anomaly detection by scanning e-invoice, e-ledger, and banking data through large-scale relational analysis. KURGAN scans all purchase and sale transactions in the current and previous periods and instantly flags those deemed “risky.” In this process, many digital data points are evaluated, including sales and purchase records, shipment information, bank payments, and e-signature timestamps. According to the VDK guide, KURGAN is “a central tax risk analysis system that operates with numerical and electronic data” and sends notices (requests for information) regarding taxpayers’ risk status. One important emphasis here is that KURGAN is a risk reporter, not a final decision-maker. As the Anadolu Agency (AA) reported, transactions the system flags as “risky” do not automatically place sellers and buyers into the status of “issuing fake invoices.” KURGAN’s findings are conveyed to taxpayers as early warnings, so taxpayers facing a potential audit can review the situation at an early stage. In this respect, KURGAN is designed as an “AI-supported anomaly detector” that aims to improve the institution’s decision processes and promote a culture of voluntary compliance.
KURGAN is an example of AI-based anomaly detection and real-time risk analysis in public auditing. Using machine learning techniques on large databases, the system anticipates fraud and non-compliance risks. On this basis, the VDK sends information request letters to taxpayers for the risky transactions it identifies and uses the responses as inputs for its audit plans. From a CIO’s perspective, reliability is paramount here: the false-positive potential of AI systems is mitigated strictly through human auditor intervention. In the audits conducted after risk reports derived from KURGAN data, all final assessments belong to human auditors. Indeed, one point repeatedly emphasized is this: KURGAN’s findings are merely alerts indicating the need for examination and do not impose a “fake invoice issuance” ruling on any taxpayer.
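The VDK has not published KURGAN’s models, so the advisory screening pattern described above can only be sketched in outline. The example below uses a robust (median/MAD) modified z-score over each taxpayer’s own invoice history; the `Transaction` fields, the scoring rule, and the 3.5 threshold are all illustrative assumptions, not KURGAN’s actual method. Note that the function only returns flags for human review; it never issues a verdict.

```python
from dataclasses import dataclass
from statistics import median


@dataclass
class Transaction:
    taxpayer_id: str   # hypothetical field names, for illustration only
    invoice_id: str
    amount: float


def flag_risky(transactions, threshold=3.5):
    """Flag transactions whose amount deviates strongly from the taxpayer's
    own history, using a robust (median/MAD) modified z-score.
    Flags are advisory: every flagged item is routed to a human auditor."""
    by_taxpayer = {}
    for t in transactions:
        by_taxpayer.setdefault(t.taxpayer_id, []).append(t)

    flagged = []
    for txs in by_taxpayer.values():
        amounts = [t.amount for t in txs]
        if len(amounts) < 5:
            continue  # too little history to score reliably
        med = median(amounts)
        mad = median(abs(a - med) for a in amounts)
        if mad == 0:
            continue  # all amounts identical; nothing to flag
        for t in txs:
            z = 0.6745 * (t.amount - med) / mad  # modified z-score
            if abs(z) > threshold:
                flagged.append((t, round(z, 2)))
    return flagged
```

A median-based score is used instead of mean/standard deviation because a single extreme invoice would otherwise inflate the deviation estimate and hide itself.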
As an anomaly detection system in auditing, KURGAN’s architectural approach rests on the diversity and quality of data sources. Advanced AI models use large volumes of structured and unstructured data. At this point, it is critically important that the data be accurate, up-to-date, and complete. Especially in AI models that work with public data, data provenance must be transparent; the source of each piece of information and the manner in which it is processed must be traceable. ISO 42001 also emphasizes the necessity of data quality and data lineage. In the case of KURGAN, official source data such as e-invoice, e-ledger, and bank records feed into a central risk analysis system; the risk scores produced based on the accounting data obtained directly influence audit strategies. In short, without a strong data infrastructure, AI predictions can be misleading; therefore, data cleansing, dimensionality reduction, and pre-training preparation steps are implemented meticulously (as highlighted, for example, in INTOSAI’s big data guidance).
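As a toy illustration of the data-cleansing and lineage ideas above, the sketch below drops incomplete and duplicate records while recording, for every input row, what happened to it and where surviving rows came from. The field names (`invoice_id`, `amount`) and metadata keys are hypothetical, not KURGAN’s schema.

```python
import datetime


def clean_with_lineage(raw_records, source):
    """Drop incomplete and duplicate records, attaching provenance metadata
    so every surviving record is traceable to its source and ingestion time.
    Returns (cleaned_records, lineage_log)."""
    seen, cleaned, lineage = set(), [], []
    for i, rec in enumerate(raw_records):
        # Completeness check: required fields must be present and non-empty.
        if any(rec.get(k) in (None, "") for k in ("invoice_id", "amount")):
            lineage.append((i, "dropped: incomplete"))
            continue
        # Deduplication on the invoice identifier.
        if rec["invoice_id"] in seen:
            lineage.append((i, "dropped: duplicate"))
            continue
        seen.add(rec["invoice_id"])
        # Attach provenance: data source and ingestion timestamp.
        enriched = dict(
            rec,
            _source=source,
            _ingested_at=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        )
        cleaned.append(enriched)
        lineage.append((i, "kept"))
    return cleaned, lineage
```

The lineage log is the point: an auditor (or an ISO 42001 assessor) can reconstruct why any input row did or did not reach the model.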
ISO 42001 and 27001: AI Governance and Security Frameworks
For the safe and responsible use of AI in public institutions, AI management systems have become critical. The ISO/IEC 42001:2023 standard provides requirements covering risk management, ethical principles, and transparency throughout the AI life cycle. According to the OECD, ISO 42001 addresses challenges such as AI ethics, transparency, and continuous learning and provides organizations with “a structured framework to manage AI-related risks and balance innovation with governance.” For example, the Nemko Digital guide notes that ISO 42001 applies AI-specific risk management, including model transparency, data quality assurance, algorithmic transparency, and bias detection. At its core, ISO 42001 establishes governance processes to ensure that AI algorithms are developed fairly and responsibly, with iterative testing, impact assessments, and responsible stakeholder engagement.
ISO 27001, in turn, complements this governance by incorporating traditional information security controls. According to Nemko Digital, Annex A controls of ISO 27001 include measures such as network segmentation, data encryption, identity and access management, and incident response plans. ISO 42001’s requirements for “data quality and traceability” are supported by ISO 27001’s controls that protect data through encryption and access logs. For example, ISO 42001’s requirement for a secure development environment is implemented with ISO 27001 through “secure development environments and network isolation.” Thus, AI systems such as KURGAN ensure that models operate within a holistic security and governance framework. Applying ISO 42001 and ISO 27001 as an integrated management system significantly reduces institutional risk in public organizations by guaranteeing both ethical AI governance and cybersecurity.
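As a minimal illustration of the identity-and-access-management and logging controls mentioned above (an invented sketch, not an actual KURGAN or ISO-mandated interface), one might wrap sensitive operations in a role check that also writes an audit trail:

```python
import datetime
import functools

# In-memory audit trail; a real deployment would use tamper-evident storage.
AUDIT_LOG = []


def require_role(role):
    """Allow a function call only for users holding `role`, and log every
    attempt (allowed or denied), in the spirit of ISO 27001 Annex A
    access-control and event-logging measures."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            allowed = role in user.get("roles", ())
            AUDIT_LOG.append({
                "user": user["name"],
                "action": fn.__name__,
                "allowed": allowed,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            if not allowed:
                raise PermissionError(f"{user['name']} lacks role {role!r}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator


@require_role("auditor")
def view_risk_report(user, taxpayer_id):
    # Placeholder for a real data-access operation.
    return f"risk report for {taxpayer_id}"
```

The design choice to log denied attempts as well as granted ones is what makes the trail useful for incident response.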
Model Governance, Explainability, and Human Oversight
Model governance principles are required for the reliability of AI models. Governance covers the entire process, from model version management and performance monitoring to the use of testable algorithms such as logistic regression. In the KURGAN example, the outputs of the machine learning algorithms used (typically supervised learning methods) must be regularly checked by human experts, and “black-box” decisions must not become institutionalized. Model explainability must be ensured, and human review must be mandatory for critical audit decisions (AI outputs do not guarantee 100% accuracy). The Turkish Data Protection Authority’s (KVKK) recommendations on AI also support this approach: they specifically emphasize that “the role of human intervention in decision-making processes” should be determined. In short, the final decision on the risks KURGAN reports (for example, initiating a tax audit) is always made by a human.
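Since the paragraph above names logistic regression as a testable, explainable algorithm, here is a hand-rolled sketch of how such a scorer can expose per-feature contributions and route high scores to a human auditor rather than deciding itself. The feature names and weights are invented for illustration; KURGAN’s real features and parameters are not public.

```python
import math

# Hypothetical feature weights; a real model would learn these from data.
WEIGHTS = {
    "invoice_ledger_mismatch": 1.8,
    "new_counterparty": 0.9,
    "cash_ratio": 1.2,
}
BIAS = -2.0


def risk_score(features):
    """Logistic risk score plus per-feature contributions, so a human
    auditor can see *why* a transaction scored as it did."""
    contributions = {k: WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions


def triage(features, threshold=0.5):
    """The model never decides: above the threshold it only *refers*
    the case to a human auditor."""
    probability, explanation = risk_score(features)
    action = "refer_to_human_auditor" if probability >= threshold else "no_action"
    return {
        "probability": round(probability, 3),
        "explanation": explanation,
        "action": action,
    }
```

Because each contribution is just weight times feature value, the explanation is exact rather than a post-hoc approximation; that is precisely why simple, testable models are attractive in audit settings.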
In the context of explainability, transparent documentation of internal processes and accountability mechanisms plays a critical role. For AI projects that process personal data, the KVKK recommends carrying out a prior Data Protection Impact Assessment (DPIA) and anonymizing the data where possible. This ensures that the data KURGAN uses complies with the legislation while also supporting transparency in actions based on model outputs. In addition, the European Data Protection Board (EDPB) has reminded that under the GDPR, AI technologies must be designed to protect personal data: the goal is for AI innovation to occur “ethically, safely, and in full respect of the GDPR.” With open data definitions and security documentation specific to KURGAN, even the data subjects concerned should be able, where necessary, to review how the system works and to challenge its results. ISO 42001 also mandates change management and documentation for models, which increases accountability in audits.
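One common technique supporting the anonymization guidance above is pseudonymization: replacing direct identifiers with keyed hashes so analysts can link records without seeing the real identity. A minimal sketch follows; the key handling and token length are illustrative, and pseudonymization is weaker than full anonymization because the key holder can still re-identify.

```python
import hashlib
import hmac


def pseudonymize(taxpayer_id, secret_key):
    """Replace a direct identifier with a keyed (HMAC-SHA256) token.
    The same id and key always yield the same token, so records remain
    linkable; without the key, the original id cannot be recomputed."""
    digest = hmac.new(secret_key, taxpayer_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated token length is arbitrary here
```

An HMAC is used rather than a plain hash so that an attacker who knows the identifier space (e.g., all valid tax numbers) cannot simply hash every candidate and match tokens.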
Legal Regulations: KVKK, GDPR, CMK, and HMK
The use of AI in public auditing is under strict legal oversight at the national and EU levels. Any AI application involving personal data must comply with the fundamental principles of the KVKK and the GDPR (lawfulness, purpose limitation, data minimization, etc.). In the KVKK context, the purpose and legal basis for processing personal data must be explicit; in the KURGAN example, a public service duty such as tax auditing legitimizes the processing under the statutory exceptions provided in the KVKK. However, the KVKK still requires attention to impact assessment, transparency, and the data subject’s right to information. Similar rules apply on the GDPR side; for example, the EDPB has stated that data used to develop AI models should be anonymized in a way that makes it very difficult to identify the individual.
In Turkish procedural law (CMK, HMK), the use of AI-based evidence is a new area that currently lacks specific regulation. As a fundamental principle, however, an AI system only provides preliminary information for investigation and examination processes; evidence that can constitute a basis for accusation or judicial rulings is obtained through human-controlled determinations and legal procedures. Under the Criminal Procedure Code (CMK), the right to a fair trial grants parties the right to challenge evidence, and erroneous AI reports can be remedied through this objection process. Similarly, under the Code of Civil Procedure (HMK), judgments cannot be based on information not obtained in accordance with procedure. On the other hand, tax legislation (VUK) includes provisions authorizing the administration to request information from taxpayers before an audit (e.g., VUK 160/A, 5); the letters sent based on KURGAN results are issued on the basis of this legal authorization. In summary, while KURGAN’s access to data arises from law, the ultimate determinant in audit and penal processes is the human auditor. This approach ensures compliance both with KVKK/GDPR privacy rights and with the rights of defense and objection under the CMK and HMK.
Corporate Risk Management and Conclusion
The reliability and acceptance of AI systems can be ensured through robust corporate governance mechanisms. As the OECD also emphasizes, public institutions should establish policy and oversight frameworks for transparency and accountability in AI. Standards such as ISO 42001 provide risk management and impact assessment processes to meet this need. Through certification and internal audits, AI projects are periodically reviewed and potential errors are detected early. Thus, the question that CIOs most worry about—“Is AI reliable; will it make wrong decisions?”—is largely answered through predictable risk management. Ultimately, when correctly designed, AI-supported auditing tools like the KURGAN system offer institutions a new assurance norm: complex events such as fraud are detected early, revenue losses are prevented, and the clarification of facts becomes easier. In this context, alignment with ISO 42001 and ISO 27001 and full compliance with European/local legislation provide institutional trust and legal assurance to AI applications. Consequently, by explicitly stating that systems like KURGAN are “advisory, not binding,” human-initiative-driven processes are preserved in AI-focused audits; this both reduces corporate risk and inspires confidence among all stakeholders.
References
Genaro-Moya, T., et al. 2025. “Artificial Intelligence and Public Sector Auditing: Challenges and Opportunities for Supreme Audit Institutions.” World 6, no. 2: 78. https://doi.org/10.3390/world6020078.
Kişisel Verileri Koruma Kurumu (KVKK). 2021. Yapay Zeka Alanında Kişisel Verilerin Korunmasına İlişkin Tavsiyeler [Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence]. 17 September 2021.
Nemko Digital. 2025. “ISO 42001 AI Cybersecurity: Complete Implementation Guide.” Nemko, 7 August 2025.
OECD. 2025. Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions. Paris: OECD Publishing.
OECD.AI. 2024. “ISO/IEC 42001:2023 – Information Technology — Artificial Intelligence — Management System.” OECD.AI, 2 July 2024.
Republic of Turkey Official Gazette (Resmi Gazete). 2016. Law on the Protection of Personal Data (No. 6698 – KVKK). 25.03.2016.
Official Journal of the European Union. 2016. Regulation (EU) 2016/679 (GDPR). 27.04.2016.
Republic of Turkey Official Gazette (Resmi Gazete). 2004. Criminal Procedure Code (No. 5271 – CMK). 08.12.2004.
Republic of Turkey Official Gazette (Resmi Gazete). 2011. Code of Civil Procedure (No. 6100 – HMK). 04.10.2011.
Republic of Turkey, Ministry of Treasury and Finance. 2025. Strategy for Combating Fake Documents and KURGAN (Kuruluş Gözetimli Analiz Sistemi) Guide. 1 October 2025.
European Data Protection Board (EDPB). 2024. “Opinion 1/2024 on the Use of Personal Data for the Development and Deployment of AI Models.” 18 December 2024.
White & Case LLP. 2024. “Long Awaited EU AI Act Becomes Law after Publication in the EU’s Official Journal.” 16 July 2024. (www.whitecase.com).
Yayla, H., and K. Konukçu. 2021. Yapay Zeka Alanında Kişisel Verilerin Korunmasına İlişkin Tavsiyeler [Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence]. Yayla & Konukçu Law Office, 17 September 2021.