READ TEXT I AND ANSWER QUESTIONS 11 TO 15
TEXT I
Will computers ever truly understand what we're saying?
Date: January 11, 2016
Source: University of California - Berkeley
Summary:
If you think computers are quickly approaching true human
communication, think again. Computers like Siri often get
confused because they judge meaning by looking at a word's
statistical regularity. This is unlike humans, for whom context is
more important than the word or signal, according to a
researcher who invented a communication game allowing only
nonverbal cues, and used it to pinpoint regions of the brain where
mutual understanding takes place.
From Apple's Siri to Honda's robot Asimo, machines seem to be
getting better and better at communicating with humans. But
some neuroscientists caution that today's computers will never
truly understand what we're saying because they do not take into
account the context of a conversation the way people do.
Specifically, say University of California, Berkeley, postdoctoral
fellow Arjen Stolk and his Dutch colleagues, machines don't
develop a shared understanding of the people, place and
situation - often including a long social history - that is key to
human communication. Without such common ground, a
computer cannot help but be confused.
“People tend to think of communication as an exchange of
linguistic signs or gestures, forgetting that much of
communication is about the social context, about who you are
communicating with," Stolk said.
The word “bank," for example, would be interpreted one way if
you're holding a credit card but a different way if you're holding a
fishing pole. Without context, making a “V" with two fingers
could mean victory, the number two, or “these are the two
fingers I broke."
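As a toy illustration of the contrast the text draws between judging meaning by a word's statistical regularity and judging it by context, consider the sketch below. The sense labels, frequencies and context cues are invented purely for illustration and are not details of any real system such as Siri or Asimo.

```python
# Invented sense frequencies for the ambiguous word "bank" (illustrative only).
SENSE_FREQUENCY = {"financial institution": 0.85, "side of a river": 0.15}

def statistical_sense() -> str:
    """Judge meaning by statistical regularity: always pick the most
    frequent sense, regardless of who is speaking or what they hold."""
    return max(SENSE_FREQUENCY, key=SENSE_FREQUENCY.get)

def contextual_sense(context: set[str]) -> str:
    """Judge meaning from shared context, the way the text says people do."""
    if "fishing pole" in context:
        return "side of a river"
    if "credit card" in context:
        return "financial institution"
    return statistical_sense()  # fall back when no contextual cue is available

print(statistical_sense())                   # financial institution
print(contextual_sense({"fishing pole"}))    # side of a river
```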
“All these subtleties are quite crucial to understanding one
another," Stolk said, perhaps more so than the words and signals
that computers and many neuroscientists focus on as the key to
communication. “In fact, we can understand one another without
language, without words and signs that already have a shared
meaning."
(Adapted from http://www.sciencedaily.com/releases/2016/01/160111135231.htm)
How facial recognition technology aids police

Police officers’ ability to recognize and locate individuals with a history of committing crime is vital to their work. In fact, it is so important that officers believe possessing it is fundamental to the craft of effective street policing, crime prevention and investigation. However, with the total police workforce falling by almost 20 percent since 2010 and recorded crime rising, police forces are turning to new technological solutions to help enhance their capability and capacity to monitor and track individuals about whom they have concerns.
One such technology is Automated Facial Recognition (known as AFR). This works by analyzing key facial features, generating a mathematical representation of them, and then comparing them against known faces in a database, to determine possible matches. While a number of UK and international police forces have been enthusiastically exploring the potential of AFR, some groups have spoken about its legal and ethical status. They are concerned that the technology significantly extends the reach and depth of surveillance by the state.
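To make the pipeline described above concrete (key facial features are turned into a mathematical representation and compared against known faces in a database), here is a minimal sketch in Python. The embedding step is left abstract, and the cosine-similarity measure, the threshold and all names are assumptions for illustration, not details of any real AFR product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face representations (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def possible_matches(query: np.ndarray,
                     database: dict[str, np.ndarray],
                     threshold: float = 0.8) -> list[tuple[str, float]]:
    """Compare a face representation against known faces in a database
    and return possible matches, best first (no final decision is made)."""
    scores = [(identity, cosine_similarity(query, stored))
              for identity, stored in database.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)
```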
Until now, however, there has been no robust evidence about what AFR systems can and cannot deliver for policing. Although AFR has become increasingly familiar to the public through its use at airports to help manage passport checks, the environment in such settings is quite controlled. Applying similar procedures to street policing is far more complex. Individuals on the street will be moving and may not look directly towards the camera. Levels of lighting change, too, and the system will have to cope with the vagaries of the British weather.
[…]
As with all innovative policing technologies there are important legal and ethical concerns and issues that still need to be considered. But in order for these to be meaningfully debated and assessed by citizens, regulators and law-makers, we need a detailed understanding of precisely what the technology can realistically accomplish. Sound evidence, rather than references to science fiction technology --- as seen in films such as Minority Report --- is essential.
With this in mind, one of our conclusions is that in terms of describing how AFR is being applied in policing currently, it is more accurate to think of it as “assisted facial recognition,” as opposed to a fully automated system. Unlike border control functions -- where the facial recognition is more of an automated system -- when supporting street policing, the algorithm is not deciding whether there is a match between a person and what is stored in the database. Rather, the system makes suggestions to a police operator about possible similarities. It is then down to the operator to confirm or refute them.
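A short sketch of that "assisted" rather than "automated" workflow, continuing the example above: the algorithm only ranks possible similarities, and the confirm-or-refute decision is left to a human operator (the review callback here is hypothetical).

```python
from typing import Callable

def assisted_recognition(candidates: list[tuple[str, float]],
                         operator_review: Callable[[str, float], bool]) -> list[str]:
    """Assisted facial recognition: the system suggests, the operator decides.

    `candidates` are (identity, similarity) pairs produced by the matcher;
    `operator_review` represents the human confirm/refute step.
    """
    return [identity for identity, score in candidates
            if operator_review(identity, score)]
```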
By Bethan Davies, Andrew Dawson, Martin Innes (Source: https://gcn.com/articles/2018/11/30/facial-recognitionpolicing.aspx, accessed May 30th, 2020)
Based on the information provided by the text, mark the statements below as true (T) or false (F).
( ) In relation to AFR, ethical and legal implications are being brought up.
( ) There is enough data to prove that AFR is efficient in street policing.
( ) AFR performance may be affected by changes in light and motion.
The statements are, respectively,
READ TEXT II AND ANSWER QUESTIONS 21 TO 25:
TEXT II
The backlash against big data
[…]
Big data refers to the idea that society can do things with a large
body of data that weren't possible when working with smaller
amounts. The term was originally applied a decade ago to
massive datasets from astrophysics, genomics and internet
search engines, and to machine-learning systems (for voice-recognition
and translation, for example) that work
well only when given lots of data to chew on. Now it refers to the
application of data-analysis and statistics in new areas, from
retailing to human resources. The backlash began in mid-March,
prompted by an article in Science by David Lazer and others at
Harvard and Northeastern University. It showed that a big-data
poster-child—Google Flu Trends, a 2009 project which identified
flu outbreaks from search queries alone—had overestimated the
number of cases for four years running, compared with reported
data from the Centres for Disease Control (CDC). This led to a
wider attack on the idea of big data.
The criticisms fall into three areas that are not intrinsic to big
data per se, but endemic to data analysis, and have some merit.
First, there are biases inherent to data that must not be ignored.
That is undeniably the case. Second, some proponents of big data
have claimed that theory (ie, generalisable models about how the
world works) is obsolete. In fact, subject-area knowledge remains
necessary even when dealing with large data sets. Third, the risk
of spurious correlations—associations that are statistically robust
but happen only by chance—increases with more data. Although
there are new statistical techniques to identify and banish
spurious correlations, such as running many tests against subsets
of the data, this will always be a problem.
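The subset-testing idea mentioned above can be illustrated with a short Python sketch: in purely random data, a naive scan turns up "statistically strong" correlations by chance, but few of them replicate on a disjoint subset of the same data. The sample sizes and threshold below are arbitrary choices made only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely random data: any correlation with y found here is spurious.
n_samples, n_features = 200, 2000
X = rng.normal(size=(n_samples, n_features))
y = rng.normal(size=n_samples)

def strong_correlations(X, y, threshold=0.25):
    """Indices of columns whose sample correlation with y exceeds the threshold."""
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return {j for j, c in enumerate(corrs) if c > threshold}

# Run the same test against two disjoint subsets of the data.
half = n_samples // 2
hits_first = strong_correlations(X[:half], y[:half])
hits_second = strong_correlations(X[half:], y[half:])

# Chance associations found on one subset rarely hold up on the other.
print("'strong' correlations in the first half:", len(hits_first))
print("of those, replicated in the second half:", len(hits_first & hits_second))
```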
There is some merit to the naysayers' case, in other words. But
these criticisms do not mean that big-data analysis has no merit
whatsoever. Even the Harvard researchers who decried big data
"hubris" admitted in Science that melding Google Flu Trends
analysis with CDC's data improved the overall forecast—showing
that big data can in fact be a useful tool. And research published
in PLOS Computational Biology on April 17th shows it is possible
to estimate the prevalence of the flu based on visits to Wikipedia
articles related to the illness. Behind the big data backlash is the
classic hype cycle, in which a technology's early proponents make
overly grandiose claims, people sling arrows when those
promises fall flat, but the technology eventually transforms the
world, though not necessarily in ways the pundits expected. It
happened with the web, and television, radio, motion pictures
and the telegraph before it. Now it is simply big data's turn to
face the grumblers.
(From http://www.economist.com/blogs/economist-explains/2014/04/economist-explains-10)
READ TEXT II AND ANSWER THE QUESTION.