Protecting Model Updates in Privacy-Preserving Federated Learning: Part Two
#### The problem

The previous post in our series discussed techniques for providing input privacy in PPFL systems where data is horizontally partitioned. This post focuses on techniques for providing input privacy when data is vertically partitioned. As described in our third post, vertical partitioning means the training data is divided across parties such that each party holds different columns of the data. In contrast to the horizontally partitioned case, training a model on vertically partitioned data is more challenging, as it is generally not possible to train separate models on different subsets of the columns and then combine them.
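To make the distinction concrete, the following sketch (not from the post; the record IDs, column names, and party labels are hypothetical) shows how the same dataset looks under horizontal versus vertical partitioning:

```python
# Illustrative sketch of horizontal vs. vertical data partitioning.
# All names and values here are invented for illustration.

# Full dataset: each row describes one individual; each key is a feature column.
rows = [
    {"id": 1, "age": 34, "income": 51000, "diagnosis": "A"},
    {"id": 2, "age": 45, "income": 72000, "diagnosis": "B"},
    {"id": 3, "age": 29, "income": 48000, "diagnosis": "A"},
]

# Horizontal partitioning: parties hold different ROWS but the SAME columns.
# Each party has complete records, so each can train a local model on its share.
party_1_horizontal = rows[:2]
party_2_horizontal = rows[2:]

# Vertical partitioning: parties hold different COLUMNS for the SAME rows,
# linked only by a shared identifier.
party_1_vertical = [{"id": r["id"], "age": r["age"], "income": r["income"]}
                    for r in rows]
party_2_vertical = [{"id": r["id"], "diagnosis": r["diagnosis"]}
                    for r in rows]

# Under vertical partitioning, neither party ever sees a full feature vector,
# which is why training separate local models and combining them does not work.
```

The key point the sketch illustrates: in the horizontal case each party's shard is a self-contained training set, while in the vertical case every complete training example spans both parties, so some form of joint (and privacy-preserving) computation is required.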
---
#### Mark Durkee
Mark Durkee is Head of Data & Technology at the Centre for Data Ethics and Innovation (CDEI). He leads a portfolio of work including the CDEI's work on the Algorithmic Transparency Recording Standard, privacy enhancing technologies, and a broader programme of work focused on promoting responsible access to data. He previously led CDEI's independent review into bias in algorithmic decision-making. He has spent over a decade working in a variety of technology strategy, architecture and cyber security roles within the UK government, and previously worked as a software engineer and completed a PhD in Theoretical Physics.
---
[Original source](https://www.nist.gov/blogs/cybersecurity-insights/protecting-model-updates-privacy-preserving-federated-learning-part-two)