HUNGARY: Using open-source software and Agile/Lean for data-first auditing

12.06.2024

In May 2023, the Hungarian SAI established an in-house software development team, with the mandate to build an internal application for accessing unified auditee data and to provide tools for increasingly data-centric audits. In addition, many of the SAI's internal mini-applications (intranet, website, internal policy database, audit status tracking, task management, timeslot booking, etc.) belong to this internal dev team.

At the one-year anniversary of this effort, we would like to share some "lessons learned" with our esteemed colleagues at the ITWG, hoping that some of them might prove useful.

First, what worked well for us:

Agile/Lean approach to software development.

Agile/Lean is a software development approach that encourages collaboration and allows requirements to evolve as a program progresses. It emphasizes iterative delivery: the development of software in short, incremental stages, with customers continuously providing feedback on the software's functionality and quality. It is well suited to programs where the end goal is known, but the specific details of the implementation may be refined along the way.

While not a silver bullet, Agile has been shown by multiple research efforts to improve the chances of IT project success by about 20-30% compared to traditional "waterfall" methodologies. We are happy to take that advantage!

[Figure: Project success rates, Agile vs. waterfall]

Source: The Standish Group International, Inc., Chaos Report 2015, based on 10,000+ software projects.

We were worried at first about how successful Agile could be in a government organization more accustomed to gargantuan waterfall-based procurement contracts, but most of our internal clients eventually understood the iterative process. After some initial successes (the team launched the first version of the main software after six months, and later replaced five internal applications in three months for various departments), the approach is now widely accepted in the SAI.

Our particular software development process is based on:

  • User Story Mapping as the method of user requirements elicitation,
  • Kanban to manage the backlog and prioritize work,
  • DevOps approach to deployment and infrastructure,
  • Continuous Integration and automated testing to provide assurances for our "release often" approach,
  • Quarterly OKRs ("Objectives and Key Results") as a means to define and measure our performance.

We are also using the DORA (DevOps Research and Assessment) metrics to monitor and compare our performance against other organizations in various industries. In 2024 Q2 we scored 7.9 (out of 10) on the DORA quick check; we hope to go above 8.0 in time.
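
The quick check itself is a short self-assessment questionnaire, but the four underlying DORA metrics (deployment frequency, lead time for changes, change failure rate, and time to restore service) can also be derived from deployment records. Below is a minimal, illustrative Python sketch of that calculation; the record format and numbers are invented for demonstration purposes and do not reflect our actual tooling or data.

    from datetime import datetime, timedelta
    from statistics import median

    # Hypothetical deployment records; in a real setup these would come from
    # the CI/CD server's deployment history.
    deployments = [
        {"at": datetime(2024, 4, 2), "lead_time": timedelta(hours=20), "failed": False, "restore": None},
        {"at": datetime(2024, 4, 9), "lead_time": timedelta(hours=6), "failed": True, "restore": timedelta(minutes=45)},
        {"at": datetime(2024, 4, 16), "lead_time": timedelta(hours=30), "failed": False, "restore": None},
    ]

    days_observed = 90

    # 1. Deployment frequency: how often code reaches production.
    deployment_frequency = len(deployments) / days_observed  # deployments per day

    # 2. Lead time for changes: commit-to-production time.
    lead_time_for_changes = median(d["lead_time"] for d in deployments)

    # 3. Change failure rate: share of deployments that caused an incident.
    change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

    # 4. Time to restore service: how quickly failed deployments were fixed.
    restores = [d["restore"] for d in deployments if d["restore"] is not None]
    time_to_restore = median(restores) if restores else None

    print(deployment_frequency, lead_time_for_changes, change_failure_rate, time_to_restore)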

Using open-source, royalty-free software paid off handsomely in flexibility and cost-effectiveness.

Some open technologies that lie at the foundations of our system:

  • PostgreSQL database, which is our workhorse: we use it as the "default approach" to every new problem, possibly replacing it later with more specialized technologies when/if necessary. (For example, we are currently evaluating ClickHouse as an analytics database for some really big datasets.)
  • Python Jupyter notebooks (server-side JupyterLab) in conjunction with DuckDB for fast, early ingestion and exploration of ad-hoc datasets. (You can literally SQL-join CSV files with a PostgreSQL table, do DataFrame calculations on the result, and upload it into a BI tool for coworkers, in just a few steps! A minimal sketch follows after this list.)
  • The Laravel and FilamentPHP frameworks proved to be incredibly time-efficient technologies on which to build our heavily form-based internal applications.
  • GitLab as our on-premise source control / task management / continuous integration server.
  • Ansible as our "Infrastructure-as-Code" DevOps tool, which allows us to deploy often, recover quickly in case of a problem, and to scale our applications as needed.
  • Grafana, Prometheus and Loki as our dashboard, logging and software metrics ("observability") infrastructure.
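
To illustrate the JupyterLab + DuckDB + PostgreSQL workflow mentioned in the list above, here is a minimal Python sketch of the kind of notebook cell we have in mind. The connection string, table and column names are hypothetical, and DuckDB's postgres extension shown here is just one way to wire DuckDB to a live PostgreSQL database; the sketch only illustrates the general technique described in the bullet above.

    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL postgres")
    con.execute("LOAD postgres")

    # Attach an existing PostgreSQL database (read-only) so its tables can be
    # queried directly from DuckDB SQL. Connection details are placeholders.
    con.execute(
        "ATTACH 'dbname=audit host=db.internal user=reader' AS pg (TYPE postgres, READ_ONLY)"
    )

    # SQL-join an ad-hoc CSV file with a PostgreSQL table, straight from the
    # notebook. Table and column names are hypothetical.
    df = con.sql("""
        SELECT a.auditee_id, a.name, c.reported_value
        FROM pg.public.auditees AS a
        JOIN read_csv_auto('incoming_report.csv') AS c
          ON a.auditee_id = c.auditee_id
    """).df()

    # From here it is an ordinary pandas DataFrame: calculate, then export the
    # result for coworkers (e.g. to a file or table the BI tool reads).
    df["flagged"] = df["reported_value"] > df["reported_value"].median()
    df.to_csv("for_bi_upload.csv", index=False)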

[Figure: JupyterLab notebook screenshot]

Accountability and auditability of Agile/Lean software development.

Interestingly, doing Agile software development internally also opened up some very lively discussions with our fellow auditors about how to measure an Agile development process and make it auditable, both inside and outside the SAI.

In a "waterfall" development, auditing is fairly easy: there is a specification, and the deliverables either conform to that specification or not. Mind you, there is zero proof that what was specified is what was needed -- and this is how waterfall-based projects often end up with software that barely covers the business needs (see chart above...), while the majority of the features go un-used and just constitute technical burden, slowing down later development.

In Agile/Lean development there is still an end goal, a business problem to solve, even if the precise requirements can (and will!) change over the course of the program. So we decided that success should be measured by:

  • the software's fit for the purpose,
  • the clients' satisfaction, and
  • the absence of unnecessary features, which we consider a waste of time and money.

In the end, we decided to do the following to make ourselves accountable and auditable:

  • We are defining quarterly OKRs as our "lighthouse".
  • We are introducing quarterly user satisfaction surveys, now that we are in our second year of development and have some visible products "out there".
  • Recently we started to log feature usage, which will allow us to identify the right functionalities to focus our efforts on (a minimal sketch of such an analysis follows below).
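
To give an idea of what this feature-usage analysis can look like, here is a minimal, illustrative Python sketch. The event names and log format are hypothetical (our real usage data lives in the application database), but the aggregation idea is the same: count events and distinct users per feature, then focus effort where usage is real.

    from collections import defaultdict

    # Hypothetical, simplified usage events; in practice these are read from
    # the application's own usage log.
    usage_events = [
        {"user": "auditor01", "feature": "report_export"},
        {"user": "auditor02", "feature": "report_export"},
        {"user": "auditor01", "feature": "timeslot_booking"},
    ]

    events_per_feature = defaultdict(int)
    users_per_feature = defaultdict(set)
    for event in usage_events:
        events_per_feature[event["feature"]] += 1
        users_per_feature[event["feature"]].add(event["user"])

    # Rarely used features are candidates for rework or removal ("waste");
    # heavily used ones are where improvement effort pays off most.
    for feature in sorted(events_per_feature, key=events_per_feature.get, reverse=True):
        print(feature, events_per_feature[feature], "events,", len(users_per_feature[feature]), "users")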

What we would do differently, knowing what we know today.

We should have spent more time and effort early on educating people about the Agile/Lean approach, its merits, and how it differs from the old waterfall approach.

To this end, we should have built and trained the internal Business Analyst layer earlier in the process. We should have started this as soon as the dev team was created, not some nine to ten months later.

Since we receive a lot of requests from various departments, the ones we have no capacity to work with can at times feel "left behind". That hiatus would have been a perfect time for a more in-depth requirements elicitation phase, resulting in better client relations; better-understood requirements could also have led to even better "minimum viable products".

For the departments we did work with, lots of "what do you mean, users should start using this before it's 100% feature-complete and fully refined?" questions needed to be answered, and expectations needed to be managed. Only after our internal clients actually received several new releases with improvements after the initial "go-live" did they start to feel confident that this was not just a trick to make them accept half-baked software, but ultimately the way to more useful functionality.

Ultimately, this "release early, release often" approach (based on our belief that imperfect software that can already be used has more value than a supposedly perfect one that cannot be used yet), combined with continuously learning how users actually use the software, proved effective in an SAI-internal context.

Thank you, dear reader, for your interest in our work!

On a closing note, we are always eager to hear about and learn from fellow SAIs' approaches and lessons learned on the IT front (good and bad experiences are both valuable), and if there is an opportunity where our experience might be of any help to our fellow ITWG colleagues, we are happy to talk!

[Figure: ESZR screenshot]

SIDEBOX: Our internal dev team's 2024 Q2 OKRs

Objective: We want to provide value as soon as possible. Key Results:

  • Deploy completed features within 24 hours.
  • Do our first quarterly user satisfaction survey.
  • Review-environment deploy time is under 15 minutes, and dev auto tests run in under 5 minutes.

Objective: We want a secure system. Key Results:

  • Zero data breaches.
  • Pass National Cybersecurity Center (NKI) audit for the website and the main software without any critical issues.
  • Do one backup/recovery test on each of the important systems.

Objective: We want a stable system. Key Results:

  • Have automated tests for each application/module in the main software.
  • Recovery/rebuild time under 1 hour.
  • Main software, intranet and website uptime is over 99.9%.

(P.S.: We do feel that our OKRs are at the moment too technical and not yet business-oriented. Changing this is an ongoing effort that will need more internal discussion in the near future. It's one area where we believe that having something, even if imperfect, is better than the perfect thing we don't have.)