Machine Learning as a Service (MLaaS) is a popular and convenient way to access a trained machine learning (ML) model through an API. However, if the user's input is sensitive, sending it to the server is not an option. Equally, the service provider does not want to share the model by sending it to the client, in order to protect its intellectual property and pay-per-query business model. As a solution, we propose MLCapsule, a guarded offline deployment of MLaaS. MLCapsule executes the machine learning model locally on the user's client, so the data never leaves the client. At the same time, we show that MLCapsule offers the service provider the same level of control and security over its model as the commonly used server-side execution. Beyond protecting against direct model access, we demonstrate that MLCapsule allows for implementing defenses against advanced attacks on machine learning models, such as model stealing, reverse engineering, and membership inference.
2021 IEEE CVPR Workshop on Fair, Data Efficient and Trusted Computer Vision