In the era of artificial intelligence (AI) and big data, predictive models have become vital across various sectors, including healthcare, finance, and genomics. These models rely heavily on processing sensitive information, making data privacy a crucial concern. The challenge lies in maximising data utility while ensuring the confidentiality and integrity of the information. Striking this balance is essential for the advancement and acceptance of AI technologies.
Collaboration and open-source
Creating a robust dataset for training machine learning models is not simple. For instance, while AI technologies like ChatGPT have thrived by gathering vast amounts of data from the Internet, healthcare data cannot be compiled freely due to privacy concerns. Constructing a healthcare dataset involves integrating data from multiple sources, including doctors, hospitals, and institutions across borders. This complexity underscores the need for robust privacy solutions.
Although the healthcare sector is emphasised for its societal importance, these principles apply broadly. For example, even a smartphone’s autocorrect feature, which personalises predictions based on user data, must navigate similar privacy issues. The finance sector likewise struggles to share data due to its competitive nature.
Thus, collaboration is not just beneficial but crucial for safely harnessing AI’s potential within our societies. An often overlooked aspect is the actual execution environment of AI and the underlying hardware that powers it. Today’s advanced AI models require robust hardware, including extensive CPU/GPU resources, substantial amounts of RAM, and more specialised technologies such as TPUs, ASICs, and FPGAs. At the same time, users increasingly expect friendly interfaces with straightforward APIs. This combination highlights the importance of solutions that let AI run on third-party platforms without sacrificing privacy, and the need for open-source tools that make these privacy-preserving technologies accessible. Every contribution to this collaborative effort is invaluable.
Privacy solutions to train machine learning models
Several sophisticated solutions have been developed to address the privacy challenges in AI, each focusing on specific needs and scenarios.
Federated Learning (FL) allows for training machine learning models across multiple decentralised devices or servers, each holding local data samples, without actually exchanging the data. Similarly, Secure Multi-party Computation (MPC) enables various parties to jointly compute a function over their inputs while keeping those inputs private, ensuring that sensitive data does not leave its original environment.
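To make the FL idea concrete, here is a minimal sketch of federated averaging in plain Python with NumPy. The toy linear model, the client setup, and the function names are illustrative, not any particular framework’s API: clients compute updates on their private data, and the server only ever sees model weights.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step on a client's private data (toy linear
    regression); the raw data never leaves the client."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average updates weighted by dataset
    size. Only model weights are exchanged, never the data itself."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy setup: three clients, each holding a private dataset.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
global_w = np.zeros(3)

for _ in range(50):  # communication rounds
    updates = [local_update(global_w.copy(), X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```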
Another set of solutions focuses on manipulating data to maintain privacy while allowing for helpful analysis. Differential privacy (DP) introduces noise to data in a way that protects individual identities while still providing accurate aggregate information. Data anonymization (DA) removes personally identifiable information from datasets, aiming to preserve anonymity and mitigate the impact of data breaches.
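As an illustration of DP, here is a minimal sketch of the Laplace mechanism applied to a counting query. The dataset, threshold, and epsilon value are illustrative; the key point is that a count has sensitivity 1, so Laplace noise with scale 1/epsilon yields epsilon-differential privacy.

```python
import numpy as np

def private_count(values, threshold, epsilon=1.0):
    """Differentially private count of values above a threshold.
    Adding or removing one person changes the count by at most 1
    (sensitivity 1), so noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 45, 29, 61, 52, 40, 38]
print(private_count(ages, threshold=50, epsilon=0.5))  # noisy answer near 2
```

Smaller epsilon values add more noise and thus stronger privacy, which is precisely the utility trade-off discussed below.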
Finally, homomorphic encryption (HE) allows operations to be performed directly on encrypted data, generating an encrypted result that, when decrypted, matches the result of operations performed on the plaintext.
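To see the homomorphic property in action, here is a sketch using the open-source python-paillier (phe) package, which implements an additively homomorphic scheme; FHE schemes, discussed next, support richer computations, but the principle is the same.

```python
# pip install phe  (python-paillier: additively homomorphic encryption)
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# The data owner encrypts two sensitive values.
enc_a = public_key.encrypt(12)
enc_b = public_key.encrypt(30)

# An untrusted server computes on ciphertexts without seeing plaintexts.
enc_sum = enc_a + enc_b    # encrypted 12 + 30
enc_scaled = enc_a * 3     # encrypted 12 * 3 (plaintext multiplier)

# Only the key holder can decrypt the results.
assert private_key.decrypt(enc_sum) == 42
assert private_key.decrypt(enc_scaled) == 36
```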
The perfect fit
Each of these privacy solutions has its own set of advantages and trade-offs. FL maintains communication with a third-party server, and the model updates exchanged in the process can leak information about the underlying data. MPC rests on cryptographic principles that are robust in theory but can create significant bandwidth demands in practice, as the sketch below illustrates.
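The sketch below shows additive secret sharing, a building block behind many MPC protocols, in plain Python; the modulus and party count are illustrative. Every private input is split into shares that must be distributed among the parties, and many intermediate results travel over the network the same way, which is exactly where the bandwidth cost comes from.

```python
import secrets

Q = 2**61 - 1  # public modulus; illustrative choice

def share(x, n_parties=3):
    """Split x into n additive shares; any subset of n-1 shares
    is uniformly random and reveals nothing about x."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Two hospitals secret-share their patient counts across three servers.
a_shares, b_shares = share(120), share(85)

# Each server adds its own shares locally; none ever sees 120 or 85.
sum_shares = [(a + b) % Q for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 205
```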
DP involves a manual setup in which noise is strategically added to the data. This limits the types of operations that can be performed, as the noise must be carefully calibrated to protect privacy while retaining data utility. DA, while widely used, often provides the weakest privacy protection: since anonymization typically occurs on a third-party server, cross-referencing with other datasets risks re-identifying the entities hidden within the dataset.
HE, specifically Fully Homomorphic Encryption (FHE), stands out by allowing computations on encrypted data that closely mimic those performed on plaintext. This capability makes FHE highly compatible with existing systems and straightforward to adopt, thanks to accessible open-source libraries and compilers such as Concrete ML, which give developers easy-to-use tools for building different applications. The major drawback is computational overhead: operations on encrypted data run significantly slower than their plaintext equivalents.
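For instance, here is a minimal sketch of what training and encrypted inference look like with Concrete ML’s scikit-learn-style interface; the dataset is synthetic, and the exact calls may differ between library versions.

```python
# pip install concrete-ml
from concrete.ml.sklearn import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on clear data, then compile the model into an FHE circuit.
model = LogisticRegression()
model.fit(X_train, y_train)
model.compile(X_train)

# Inference runs on encrypted inputs; only the key holder sees the output.
y_pred = model.predict(X_test, fhe="execute")
```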
While all the solutions and technologies discussed encourage collaboration and joint efforts, FHE stands out for its potential to revolutionise data privacy. It can drive innovation and enable a scenario in which no trade-offs are needed: people can enjoy services and products without compromising their personal data.