Voices from the AISG Federated Learning Lab
The Federated Learning Lab is a major initiative within AI Singapore that seeks to bring the benefits of collaborative machine learning to industry while respecting data privacy. Huang Jie and Han Xiang (Han) are two AI apprentices from Batch 5 of the AI Apprenticeship Programme who shared with me their contributions to this effort and what they learned along the way.
Below is a transcript of the conversation [*].
Basil : Hi Huang Jie and Han. Great to have you guys here today.
Huang Jie : Thank you, glad to be here.
Han : Yes, thank you for having us.
Basil : Okay, before we begin, I would like to invite you both to introduce yourselves to our listeners. Ladies first, let’s start with Huang Jie.
Huang Jie : For me, after obtaining my PhD in molecular dynamics, I had been working as a software developer for around 3 years. I got interested in machine learning last year and took online courses to learn more about it. Then I found the AIAP programme in AISG, thinking this was the best chance for me to get into this industry.
Han : So, for me, I got my bachelor’s in computer science in 2019. Unfortunately there weren’t any courses on machine learning or AI theory at my school, so I studied it on my own. I read papers, did courses, personal projects. AI Singapore is the first machine learning related job that I’ve done. It’s been an irreplaceable learning experience and I’m glad to have had it.
Basil : So, today we’re going to talk about federated learning. I first got to know about federated learning last year and I did some reading up on it. So, I’m not a total stranger to it. But for the benefit of our listeners, could you explain what federated learning is?
Han : Okay, yes, in super simple terms. Normal machine learning typically requires that you have all of your data together in one bin, one container. Now, imagine a scenario where you have a bunch of containers. Each container is owned by a different person, and none of those people really trust each other. Let’s say one container is owned by a big dog, and a bin owned by another person is filled with beef. You can’t let the dog touch the beef or it will eat it, and this would be bad. And let’s say another bin contains secrets pertaining to the private lives of a bunch of people. You can’t let anyone see those secrets or you might have a bunch of lawsuits dumped on you, and the government might decide to slap you with some limiting regulations as punishment. Anyway, you now need to do machine learning on all of these separate containers whose contents you can’t see and definitely can’t reveal, and you can’t let the owners mess with any bins that are not theirs. So, it’s pretty challenging, yeah.
Huang Jie : Maybe I can give an example of an actual use case. Imagine you are a hospital owner, and you want to predict how likely it is that a patient will perish. You do not have many fatalities recorded. Maybe your hospital is a good one, or you just have very few electronic records. Predicting mortality is a hard job that is probably going to need lots of data, so your only option is to collaborate with a bunch of other hospitals. Of course, well-run hospitals are not going to reveal the details of their patient data to just anyone. The data has to be kept secret and confidential – it’s the sort of situation federated learning is intended for.
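To make the idea concrete, here is a minimal sketch of the federated averaging pattern that many federated learning systems build on, under the assumption of a toy logistic model and randomly generated “hospital” datasets (none of this is AISG’s actual platform code): each party trains on its own private data, and only the model weights are shared and averaged.

```python
# Minimal federated averaging sketch with NumPy; the parties, data and
# single-layer logistic model are illustrative placeholders only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally for a few epochs; only the updated weights ever leave the party."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))       # logistic regression predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on the party's own data
    return w

# Each "hospital" holds its own private dataset; the raw rows are never shared.
parties = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):                        # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in parties]
    global_w = np.mean(local_ws, axis=0)   # the coordinator averages weights, not data
print("final global weights:", global_w)
```

In a real deployment the averaging step would run on a coordinating server and would typically be hardened with techniques such as secure aggregation, but the essential point is the same: raw data never leaves a party.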
Basil : Interesting. And what are some other industries or applications where federated learning can also add value?
Huang Jie : Yes, actually, there are two types of industries where federated learning is particularly useful. The first one would be finance. Machine learning in finance is a little bit infamous because financial data is typically a great simplification of extremely complex real world phenomena. For example, how would you go about predicting whether the stock market will rise or fall based on Donald Trump catching the coronavirus or tweeting something? How would you account for back-room dealings, pump and dump schemes, or the discovery of new resources in some area? The simplest solution is just to throw lots of data at it, as much as you can, and you can get extremely large amounts of data if you make use of what multiple financial institutions have collected. But because this is financial data, it has to be kept secret for reasons of security and competitive advantage. So, federated learning would allow financial organisations to collaborate with their competitors without revealing their data to any of them.
Han : The second industry would be IoT, the Internet of Things. The issue with IoT is that it typically makes use of information collected from appliances and tools, which are a very fundamental part of people’s daily lives and which are constantly collecting data. So, things like temperature readings of someone’s house, the amount of food in a fridge, the types of things they watch on TV, the topics of conversations they have with family members and so on. Now, if someone had access to such data they could create an extremely accurate profile of the lifestyle, psychology and ideology of the people associated with the data. In a benign way, this could be used for lifestyle optimisation. You know, you are running out of a certain type of food so the fridge will automatically order more … the house adjusts the temperature so it’s ideal for what you typically like at a certain point of the day, that sort of nice thing. Maliciously, this could be used to determine the perfect time to, say, rob your house and kidnap your child. More seriously, it could be used to profile people according to some measure of social desirability. Let’s just say that it’s data that extremist groups with dreams of radically reshaping society would love to have. It is basically perfect and ubiquitous surveillance. Federated learning would allow IoT companies to collect and make use of this data, while reducing the possibility of that data being used for such nefarious purposes.
Basil : Yes, on a related note, I also recall a point made by technology thought leader Kai-Fu Lee in his 2018 book “AI Superpowers”. He mentioned three ingredients of successful AI implementation: big data, computing power and AI engineering talent. Once computing power and engineering talent reach a certain threshold, the quantity of data becomes decisive in determining the overall power and accuracy of an algorithm. I think this is a very pertinent point for Singapore, where our size limits the amount of data we have at our disposal. So, now, translating theory into practice and moving from concept to application, what are the technical challenges encountered when implementing a federated learning solution?
Han : Well, I think the main challenge in federated learning comes from what we call statistical and system heterogeneity. Statistical heterogeneity is a natural characteristic of data that comes from many different sources. Let’s take the mortality prediction task we had previously. We have two hospitals: one is in North America, the other is in Uganda. The data from both hospitals deals with the same task, which is mortality prediction, but it reflects completely different sets of socioeconomic circumstances and is influenced by many unique factors. They don’t really represent the same real world phenomena, and machine learning models trained on each of them can end up approximating functions that look and behave very differently from each other. The second challenge I mentioned was system heterogeneity. You have a bunch of participants. Each participant has a different computer. Some of those computers are better, some are worse. Some participants will complete their tasks much faster than the others, and then we’ll have to wait. Some participants’ computers will crash, disconnect or jam and leave everyone else waiting with no idea what’s going on. So it’s a pretty difficult coordination problem, and it’s still unsolved for federated learning. Like, how do you account for all of these issues of timing?
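As a toy illustration of the statistical heterogeneity Han describes (not taken from the interview), the snippet below splits one labelled dataset across parties using a Dirichlet prior, a common way to simulate non-IID partitions in federated learning experiments; the number of parties and the alpha value are arbitrary choices.

```python
# Simulate non-IID ("statistically heterogeneous") partitions of one dataset.
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=600)      # 600 samples, 3 classes
n_parties, alpha = 4, 0.3                  # smaller alpha => more skewed parties

party_indices = [[] for _ in range(n_parties)]
for c in range(3):
    idx = np.where(labels == c)[0]
    rng.shuffle(idx)
    props = rng.dirichlet([alpha] * n_parties)             # this class's share per party
    cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
    for p, chunk in enumerate(np.split(idx, cuts)):
        party_indices[p].extend(chunk.tolist())

for p, idx in enumerate(party_indices):
    print(f"party {p}: label counts {np.bincount(labels[idx], minlength=3)}")
```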
Huang Jie : On top of that, although federated learning has the benefit of allowing parties to collectively train a model without exposing their data, this is a double-edged sword. Since no data is exposed, in a mild case a party might simply free-ride on the model gradients derived from other parties’ better data while contributing lousy gradients computed from its own useless data. Things can become more severe: a party might even perform data poisoning by contributing model gradients that are specifically and carefully manipulated to achieve its malicious goals. So, there is a need to evaluate how much each party’s data contributes to the final model and also to work out how to distribute the reward fairly among all the parties.
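Contribution evaluation is an active research area, but as a purely illustrative sketch (not AISG’s method), one rough baseline is a leave-one-out scheme: train the federated model with every party, retrain with each party left out in turn, and credit that party with the resulting change in validation accuracy. The toy parties, noise levels and logistic model below are hypothetical.

```python
# Toy leave-one-out contribution estimate on a simulated federated task.
import numpy as np

rng = np.random.default_rng(2)

def make_party(n, noise):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    flip = rng.random(n) < noise                 # noisier data => less useful party
    return X, np.where(flip, 1 - y, y)

def local_update(w, X, y, lr=0.5, epochs=20):
    w = w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def federated_train(parties, rounds=10):
    w = np.zeros(2)
    for _ in range(rounds):
        w = np.mean([local_update(w, X, y) for X, y in parties], axis=0)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

parties = [make_party(200, noise) for noise in (0.0, 0.1, 0.4)]
X_val, y_val = make_party(500, 0.0)              # clean held-out validation set

base = accuracy(federated_train(parties), X_val, y_val)
for i in range(len(parties)):
    rest = parties[:i] + parties[i + 1:]
    drop = base - accuracy(federated_train(rest), X_val, y_val)
    print(f"party {i} leave-one-out contribution: {drop:+.3f}")
```

A negative value here means the model actually improves when that party is removed, which is exactly the free-riding or poisoning situation Huang Jie describes.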
Basil : Okay, now coming to AI Singapore. Could you tell us more about what AI Singapore is working on in federated learning?
Huang Jie : The federated learning team in AISG aims to build a federated learning platform that can be made available to all organisations in Singapore. Participants will be able to benefit from a better “model” obtained by collaboratively utilising data from all the parties, without sacrificing the privacy of their own data. In addition, each party can be rewarded according to its contribution. The project was started by our Batch 4 apprentices. They were the ones who made this happen. They built a platform which has been internally released to AI Singapore engineers and partners to gather feedback. In addition, they also performed use case studies to demonstrate the applicability and versatility of the platform. Han and I are from Batch 5. Our focus is mainly on how to properly incentivise the participants through accurate and fair contribution calculations.
Han : If I could add a little bit more about that, our goal was to find some sort of general solution to the contribution calculation problem, and a general solution doesn’t really exist… Well, there is a general solution, but it is pretty impractical and certainly not suitable for our use case. We started off by trying a whole bunch of new algorithms. We thought that we would try to calculate contribution in a certain special way and then use that method of calculating contribution as the basis for a new federated learning algorithm. So, we came up with a whole bunch of variations which we called FedMean, FedMomentum, FedDemocracy, FedDictatorships … all sorts of new things, none of which really worked well. Well, that’s not true – all of them worked well in certain situations, but failed in others…
Basil : Not universally, you mean?…
Han : Exactly. Not general solutions. It really wasn’t great. It was very inconsistent. We eventually realized that the problem is that with a standard federated learning algorithm, the contribution calculation is very hard because each party that trains the model is affected by information that comes from other parties, so at some point their individual contributions become tangled up. For example, let’s say you have two parties and they’ve been training a model for a certain amount of time, and Party #1 has received a model from Party #2 in the past. Now, it’s going to build its future model based on what Party #2 gave it. If the two parties keep exchanging information in this way, the information that they contribute becomes very intertwined, impossible to separate. You can’t just cleanly cut the model in half and say this part was contributed by Party #1 and this part was contributed by Party #2. They all interact in a very unpredictable way. So, how do we solve this fundamental challenge? We came up with an algorithm that we called FedSwarm. FedSwarm is an ensemble learning method, meaning that it basically relies on training a large number of very simple, small machine learning models and then using them all at the same time. We basically treat it like a democracy, in that each model votes for what the correct answer is, and then we can take the majority vote, or we can weigh the votes in some manner, but either way the models are collaboratively deciding what the right answer is. This is as opposed to standard machine learning, where one model makes a decision and you just follow it. Skipping over all the minute technical details, we found that this method got us superior performance compared to the standard federated learning algorithms, and it also meant that we could easily get the contribution of each party in federated learning, because each party is training models that never interact with anyone else’s models, so the information never gets intertwined. You can just separate them and say, oh, these models trained by this party get this set of answers correct and those models get another set of answers correct – you can easily tell which beneficial contributions each party is responsible for.
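The following is a speculative sketch of the ensemble-voting idea Han describes, not the actual FedSwarm implementation (whose details are not covered here): each party trains a few small models on bootstrap samples of its own data only, predictions are pooled by majority vote, and because the models never mix, each party’s contribution can be read off from its own sub-ensemble’s accuracy on a held-out set. The toy data, noise levels and choice of decision trees are assumptions made for illustration.

```python
# Party-local ensembles with majority voting and per-party contribution scores.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

def make_data(n, noise):
    """Toy binary task; `noise` flips a fraction of labels to mimic lower-quality data."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] - X[:, 2] > 0).astype(int)
    flip = rng.random(n) < noise
    return X, np.where(flip, 1 - y, y)

parties = {"A": make_data(300, 0.05), "B": make_data(300, 0.20), "C": make_data(300, 0.45)}
X_val, y_val = make_data(1000, 0.0)

# Each party trains small models on bootstrap samples of ITS OWN data only;
# no model ever sees, or is influenced by, another party's data.
party_models = {}
for name, (X, y) in parties.items():
    models = []
    for _ in range(5):
        boot = rng.integers(0, len(X), len(X))
        models.append(DecisionTreeClassifier(max_depth=2).fit(X[boot], y[boot]))
    party_models[name] = models

# Global prediction = majority vote over every party's models.
all_models = [m for ms in party_models.values() for m in ms]
votes = np.mean([m.predict(X_val) for m in all_models], axis=0)
print("ensemble accuracy:", np.mean((votes > 0.5) == y_val))

# Contributions stay disentangled: score each party's sub-ensemble separately.
for name, models in party_models.items():
    sub = np.mean([m.predict(X_val) for m in models], axis=0)
    print(f"party {name} sub-ensemble accuracy: {np.mean((sub > 0.5) == y_val):.3f}")
```

In a real federated setting each party would train its models on its own machine and only the fitted models (or their predictions) would be shared, but the separability of the per-party scores is the property Han highlights.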
Basil : Sounds like a lot of interesting work that you guys have done. I’m particularly intrigued by the attention paid to the contributions of individual parties, because you can have the most mathematically brilliant model training solution, but things will not work simply according to plan if you do not consider the so called human factor. Let’s talk about the apprenticeship in general. How has it been for you guys?
Han : I think it’s been very interesting. For a very long time, I honestly had very few people to talk to who wanted to do AI and machine learning. Most of my classmates were interested in software and network development, but not really in machine learning. So I eventually decided that those things, interesting and important as they are, are not really what I want to focus my career on. Getting into AI Singapore meant that I suddenly got to talk and interact with a lot of people who had the same goals and were interested in the same topics. So, the experience has been pretty wonderful. I’m quite glad to have had the opportunity to be an apprentice. It really expanded my worldview and I’m quite grateful for that.
Huang Jie : I agree with what Han has said. On top of that, the apprenticeship actually feels quite stressful for me. But all the apprentices are talented and hardworking, so working together with them inspires me a lot. Since we all come from different backgrounds, having the chance to discuss things with everyone here, even general non-machine-learning topics, is quite inspiring to me.
Basil : Good to hear that. Are there any particular learning points that you guys would like to mention, especially to listeners considering applying for the programme?
Han : Well, I think that the breadth and scope of the 100 Experiments projects at AI Singapore is pretty inspiring. I think that the real value of machine learning comes out when it’s applied to solve practical real world problems, and the 100E projects focus on a lot of problem areas that many people might not think about or consider, but which are necessary. In my case, I had never heard about federated learning. I never knew it was a thing. But I got the chance to work on it in depth, to really explore the theory and the technical side of it in great detail and experiment with it. And Huang Jie and I were able to write a paper on FedSwarm … how do I say? I’m not really sure how to summarise this, but I think we both definitely got a lot more out of the programme than we were expecting.
Huang Jie : Since Han has mentioned the technical side, maybe I can say more about the non-technical side. I have learned a lot from our mentor, Jianshu, who is the lead in the Federated Learning Lab, and also from my teammate, Han. They inspire me by being original and attending to detail: Han is very excited about trying out new ideas, and Jianshu is very careful about the theoretical grounding of the approach that we are taking. Now, AI – which is quite an overloaded term – is a rapidly advancing field which requires whoever is in it to keep up with the research frontier, but at the same time to focus on the application of their research advancements. So, if I were to summarise, I would say it is just like what the Red Queen said: “It takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”
Basil : I hope the experience will serve you guys well in the future. Thanks for being here today. Here’s wishing you guys all the best in the rest of the programme and beyond.
Han : Thank you very much.
Huang Jie : Thank you.
The Federated Learning Series
- AI Singapore’s Journey Into the World of Federated Learning
- Voices from the AISG Federated Learning Lab (this article)
- A Peek into Synergos – AI Singapore’s Federated Learning System
[*] This conversation was transcribed using Speech Lab. The transcript has been edited for length and clarity.