Agile AI Engineering in AISG

At AI Singapore, we are constantly running several 100 Experiments (100E) projects. To ensure smooth and successful delivery, it is essential to have some form of structured methodology to help guide the team. For this, we adopt well-known principles of Lean and Agile development, with Scrum and Kanban among the common frameworks used.

While such practices are prevalent among mature development teams, one of the key challenges we face is that our apprentices come from diverse backgrounds and tend not to have any prior experience in software or AI development. They have likely never worked in a team-based setting that requires close communication. Easing them into such an environment and helping them understand its benefits is therefore an ongoing process, one that we ourselves are still practising and adapting.

Scrum is a popular framework for implementing Agile practices, and as it was the concept best understood by the founding team, adopting it to kickstart the 100E projects was the natural thing to do. However, after experimenting with it for a while, we found that the rigidity of a fixed planning cycle was not a particularly good fit for AI development, where current tasks can very often uncover new information that affects previously planned tasks. To work around this, some teams switched to a shorter weekly planning cycle instead of the usual 2- or 3-week cycle. The change proved useful and resulted in less churn of planned tasks, as we plan for the nearer term and focus on answering current questions.

The typical work involved in AI development can be grouped and roughly visualised in the following diagram.

At any point in time, the engineers and apprentices may be working on several of these. EDA (Exploratory Data Analysis) is where the bulk of the uncertainty lies. The uncertainties tend to revolve around the quality of the data at hand and often surface during the initial phases of the project, when the data is not yet well understood. To a lesser extent, a fair amount of uncertainty also surfaces during the modelling and model debugging phases. Frequently, further research is needed when the initial approach does not yield good results or has simply been stretched to its full potential. That aside, more typical engineering work, such as building out the data pipeline, creating the scaffolding required to run our modelling experiments and building out the CI/CD pipeline, is much more predictable and amenable to the kind of task planning found in usual software engineering projects. In many instances, planning out the tasks to be done revolves around balancing these two kinds of work.

To enable fast iteration of model building and experimentation, teams are encouraged to quickly build out their AI pipelines so that they can easily ingest the required data and test out different modelling approaches. To achieve this, we rely on tools such as a good CI/CD platform (e.g. GitLab) as well as an experiment management platform in the form of Polyaxon. A reliable CI/CD pipeline enables us to push changes out for deployment as and when required, safe in the knowledge that the built-in sanity checks and tests will catch any code issues early. On the model development side, Polyaxon allows us to run our experiments, including model training, hyperparameter tuning and evaluation. Metrics tracking is another basic but helpful feature that Polyaxon provides.
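To give a flavour of the metrics tracking, here is a minimal sketch of what logging from a training script might look like, assuming Polyaxon's Python `tracking` module; the `train_and_evaluate` function and all parameter and metric values are hypothetical stand-ins for a project's actual training code.

```python
# Minimal sketch of metrics tracking with Polyaxon's Python tracking module.
# train_and_evaluate and all values below are hypothetical stand-ins.
from polyaxon import tracking


def train_and_evaluate(learning_rate: float) -> dict:
    """Hypothetical placeholder for a project's actual training loop."""
    return {"loss": 0.42, "accuracy": 0.87}


def main():
    # Attach this script to the Polyaxon run it is executing under.
    tracking.init()

    params = {"learning_rate": 1e-3}
    tracking.log_inputs(**params)  # record hyperparameters against the run

    metrics = train_and_evaluate(**params)
    # Log evaluation metrics so they show up in the Polyaxon dashboard.
    tracking.log_metrics(**metrics)


if __name__ == "__main__":
    main()
```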

Another common challenge that we face in AI development is data versioning. This is still an evolving area, and we are starting to experiment with tools such as DVC and Kedro. Because the types of data used across our projects vary, we recommend choosing the data versioning tool that best suits each project's data, as each tool has its own strengths and weaknesses.
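As one illustration of what data versioning buys us, DVC's Python API can read back the exact version of a dataset pinned to a Git revision. In the sketch below, the repository URL, file path and revision are all hypothetical placeholders.

```python
# Minimal sketch of reading a DVC-versioned dataset pinned to a Git revision.
# The repo URL, path and rev below are hypothetical placeholders.
import pandas as pd
import dvc.api

# Stream the exact version of the data that existed at the "v1.2" tag.
with dvc.api.open(
    "data/raw/transactions.csv",      # path tracked by DVC in that repo
    repo="https://gitlab.example.com/aisg/sample-project.git",
    rev="v1.2",                       # Git tag, branch or commit SHA
) as f:
    df = pd.read_csv(f)

print(df.shape)
```

The versioning itself happens on the command line with `dvc add` and `dvc push` alongside an ordinary Git commit or tag; the Python API is simply a convenient way to consume a pinned version from code.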

Compared to data versioning, model versioning is a much easier proposition, as existing practices for versioning software artifacts are usually adequate. In this case, having a good CI/CD pipeline with an artifact publishing mechanism that tags the artifacts appropriately is enough for most projects' needs.
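As a rough sketch, a publishing step in such a pipeline might tag each trained model with the Git commit it was built from. The paths and naming scheme below are hypothetical; `CI_COMMIT_SHORT_SHA` is one of GitLab CI's predefined variables.

```python
# Rough sketch of a CI publishing step that versions a model artifact by
# tagging it with the Git commit SHA. Paths and the naming scheme are
# hypothetical; GitLab CI provides the CI_COMMIT_SHORT_SHA variable.
import os
import shutil
from pathlib import Path


def publish_model(model_path: str, artifact_dir: str = "artifacts") -> Path:
    # Fall back to "dev" when running outside the CI pipeline.
    version = os.environ.get("CI_COMMIT_SHORT_SHA", "dev")
    dest = Path(artifact_dir) / f"model-{version}.pkl"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(model_path, dest)  # copy the trained model to its tagged name
    return dest


if __name__ == "__main__":
    print(publish_model("outputs/model.pkl"))
```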

Coming up next, we aim to introduce a more test-driven approach to building up our AI pipelines. The challenge is to balance experimental code that lives in notebooks against production-ready pipeline code. To avoid frequent and unnecessary rework, the core pipeline code should be relatively stable, which then allows us to write more meaningful tests that correctly assert its functionality. As we start to explore this space, frameworks such as Kedro are helpful, as they enable us to write our transformation functions in isolation and chain them together to form the desired pipeline.
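To sketch what this might look like, the snippet below defines two transformation functions as plain Python functions, unit-tests one of them in isolation, and chains both into a Kedro pipeline. `node` and `Pipeline` are Kedro's API; the functions and dataset names are made up for illustration.

```python
# Sketch of testable transformation functions chained into a Kedro pipeline.
# The functions and dataset names are hypothetical; node/Pipeline are Kedro's.
import pandas as pd
from kedro.pipeline import Pipeline, node


def drop_missing_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Pure function: easy to unit-test without any pipeline machinery."""
    return df.dropna()


def add_total_column(df: pd.DataFrame) -> pd.DataFrame:
    """Derive a total column from price and quantity."""
    return df.assign(total=df["price"] * df["quantity"])


def create_pipeline() -> Pipeline:
    # Chain the pure functions together; Kedro resolves the dataset names
    # against the project's data catalog at run time.
    return Pipeline(
        [
            node(drop_missing_rows, inputs="raw_orders", outputs="clean_orders"),
            node(add_total_column, inputs="clean_orders", outputs="orders_with_total"),
        ]
    )


# A plain pytest-style test asserting one node's functionality in isolation.
def test_add_total_column():
    df = pd.DataFrame({"price": [2.0, 3.0], "quantity": [1, 4]})
    out = add_total_column(df)
    assert out["total"].tolist() == [2.0, 12.0]
```

Because each node is a pure function, the tests need no pipeline machinery at all, which keeps them fast and meaningful.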

Finally, going full Kanban, where we drop fixed planning sessions in favour of more ad hoc design sessions, is also on the cards. As we embark on a brand new year, I'm looking forward to continually improving our engineering practices. If all this sounds exciting to you, drop by and have a chat with us or leave a comment. We are always on the lookout for talent to help us build the AI ecosystem and bring AI development in Singapore to a higher level.
