Many machine learning models are black boxes in the sense that it is not transparent, even to their designers, how they arrive at their classifications. Nonetheless, these models can be extremely reliable.

In the philosophy of knowledge, reliabilists claim that the reliability of a process is sufficient to justify the beliefs that result from that process. Certain theorists have thus argued that the outputs of black boxes can be justified so long as they are reliable. This project investigates whether the use of black boxes in government decision-making would be lawful in light of such reliabilist arguments. The intended output of the project is a chapter in an edited book collection on automation in public governance, edited by A/Prof Yee Fui Ng and Prof Matthew Groves and currently under contract with Hart Publishing.

Project lead: