Text I

Understanding bias in facial recognition technologies

Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems on impacted individuals and communities. Critics argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threaten to violate civil liberties, infringe on basic human rights and further entrench structural racism and systemic marginalisation. In addition, they argue that the gradual creep of face surveillance infrastructures into every domain of lived experience may eventually eradicate the modern democratic forms of life that have long provided cherished means to individual flourishing, social solidarity and human self-creation.

Defenders, by contrast, emphasise the gains in public safety, security and efficiency that digitally streamlined capacities for facial identification, identity verification and trait characterisation may bring. These proponents point to potential real-world benefits such as the added security of facial-recognition-enhanced border control, the increased efficacy of searches for missing children or criminal suspects driven by the application of brute-force facial analysis to large-scale databases, and the many added conveniences of facial verification in the business of everyday life.

Whatever side of the debate on which one lands, it would appear that FDRTs are here to stay. 

Adapted from: understanding_bias_in_facial_recognition_technology.pdf

In the first sentence, when the author says that the debate “has reached a boiling point”, he means that the debate is

© Aprova Concursos - Al. Dr. Carlos de Carvalho, 1482 - Curitiba, PR - 0800 727 6282