Exam Questions – Aprova Concursos


READ TEXT I AND ANSWER QUESTIONS 16 TO 20

TEXT I

Will computers ever truly understand what we're saying?

Date: January 11, 2016
Source: University of California - Berkeley

Summary:
If you think computers are quickly approaching true human communication, think again. Computers like Siri often get confused because they judge meaning by looking at a word's statistical regularity. This is unlike humans, for whom context is more important than the word or signal, according to a researcher who invented a communication game allowing only nonverbal cues, and used it to pinpoint regions of the brain where mutual understanding takes place.

From Apple's Siri to Honda's robot Asimo, machines seem to be getting better and better at communicating with humans. But some neuroscientists caution that today's computers will never truly understand what we're saying because they do not take into account the context of a conversation the way people do.

Specifically, say University of California, Berkeley, postdoctoral fellow Arjen Stolk and his Dutch colleagues, machines don't develop a shared understanding of the people, place and situation - often including a long social history - that is key to human communication. Without such common ground, a computer cannot help but be confused.

"People tend to think of communication as an exchange of linguistic signs or gestures, forgetting that much of communication is about the social context, about who you are communicating with," Stolk said.

The word "bank," for example, would be interpreted one way if you're holding a credit card but a different way if you're holding a fishing pole. Without context, making a "V" with two fingers could mean victory, the number two, or "these are the two fingers I broke."

"All these subtleties are quite crucial to understanding one another," Stolk said, perhaps more so than the words and signals that computers and many neuroscientists focus on as the key to communication. "In fact, we can understand one another without language, without words and signs that already have a shared meaning."

(Adapted from http://www.sciencedaily.com/releases/2016/01/160111135231.htm)
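The contrast the article draws, judging a word's meaning from its overall statistical regularity versus from the context it appears in, can be made concrete with a minimal sketch. The sense inventory, the frequencies and the cue words below are invented purely for illustration; this is not how Siri or any production assistant is actually built.

```python
# Toy illustration of the article's contrast: choosing a word sense from its
# overall frequency alone vs. letting the surrounding context decide.
# Sense frequencies and cue words are invented for this example.

SENSES = {
    "bank": {
        "financial institution": {"freq": 0.85, "cues": {"credit", "card", "money", "loan"}},
        "river bank":            {"freq": 0.15, "cues": {"fishing", "pole", "river", "water"}},
    }
}

def frequency_only(word: str) -> str:
    """Pick the statistically most common sense, ignoring context."""
    senses = SENSES[word]
    return max(senses, key=lambda s: senses[s]["freq"])

def context_aware(word: str, context: str) -> str:
    """Pick the sense whose cue words overlap most with the context."""
    tokens = set(context.lower().split())
    senses = SENSES[word]
    return max(senses, key=lambda s: (len(senses[s]["cues"] & tokens), senses[s]["freq"]))

if __name__ == "__main__":
    sentence = "he walked to the bank holding a fishing pole"
    print(frequency_only("bank"))           # financial institution (frequency wins)
    print(context_aware("bank", sentence))  # river bank (context wins)
```

Run on the fishing-pole sentence from the text, the frequency-only chooser still answers "financial institution", while the context-aware one switches to the river sense; that mismatch is the kind of confusion the article attributes to context-blind systems.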

The word “so” in “perhaps more so than the words and signals” is used to refer to something already stated in Text I. In this context, it refers to:

READ TEXT I AND ANSWER QUESTIONS 16 TO 20


If you are holding a fishing pole, the word “bank” means a:

READ TEXT II AND ANSWER QUESTIONS 21 TO 25:

TEXT II

The backlash against big data

[…]

Big data refers to the idea that society can do things with a large body of data that weren't possible when working with smaller amounts. The term was originally applied a decade ago to massive datasets from astrophysics, genomics and internet search engines, and to machine-learning systems (for voice-recognition and translation, for example) that work well only when given lots of data to chew on. Now it refers to the application of data-analysis and statistics in new areas, from retailing to human resources. The backlash began in mid-March, prompted by an article in Science by David Lazer and others at Harvard and Northeastern University. It showed that a big-data poster-child—Google Flu Trends, a 2009 project which identified flu outbreaks from search queries alone—had overestimated the number of cases for four years running, compared with reported data from the Centres for Disease Control (CDC). This led to a wider attack on the idea of big data.

The criticisms fall into three areas that are not intrinsic to big data per se, but endemic to data analysis, and have some merit. First, there are biases inherent to data that must not be ignored. That is undeniably the case. Second, some proponents of big data have claimed that theory (ie, generalisable models about how the world works) is obsolete. In fact, subject-area knowledge remains necessary even when dealing with large data sets. Third, the risk of spurious correlations—associations that are statistically robust but happen only by chance—increases with more data. Although there are new statistical techniques to identify and banish spurious correlations, such as running many tests against subsets of the data, this will always be a problem.

There is some merit to the naysayers' case, in other words. But these criticisms do not mean that big-data analysis has no merit whatsoever. Even the Harvard researchers who decried big data "hubris" admitted in Science that melding Google Flu Trends analysis with CDC's data improved the overall forecast—showing that big data can in fact be a useful tool. And research published in PLOS Computational Biology on April 17th shows it is possible to estimate the prevalence of the flu based on visits to Wikipedia articles related to the illness. Behind the big data backlash is the classic hype cycle, in which a technology's early proponents make overly grandiose claims, people sling arrows when those promises fall flat, but the technology eventually transforms the world, though not necessarily in ways the pundits expected. It happened with the web, and television, radio, motion pictures and the telegraph before it. Now it is simply big data's turn to face the grumblers.

(From http://www.economist.com/blogs/economist-explains/2014/04/economist-explains-10)
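The third criticism, that chance correlations multiply as more data is scanned, and the countermeasure the text mentions, re-testing candidate associations against subsets of the data, can be illustrated with a small simulation. The number of series, the correlation threshold and the half-and-half split below are arbitrary choices for this sketch, not figures taken from the article or from the Science paper.

```python
# Sketch of the article's third criticism: scan enough unrelated data series and
# some will correlate with the target purely by chance; the countermeasure it
# mentions - re-testing candidate associations on another subset of the data -
# removes most of them. All thresholds and sizes here are arbitrary.
import random
import statistics

random.seed(42)

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

n_weeks = 40
half = n_weeks // 2
target = [random.gauss(0, 1) for _ in range(n_weeks)]             # e.g. weekly case counts
candidates = {f"series_{i}": [random.gauss(0, 1) for _ in range(n_weeks)]
              for i in range(2000)}                                # unrelated noise series

# Step 1: scan the first half of the weeks - several "strong" correlations
# appear even though every series is pure noise.
hits = [name for name, s in candidates.items()
        if abs(pearson(target[:half], s[:half])) > 0.5]
print(f"apparent correlations found by scanning: {len(hits)}")

# Step 2: re-test each hit on the second half, which played no part in
# selecting it; most of the chance findings fail to reappear.
survivors = [name for name in hits
             if abs(pearson(target[half:], candidates[name][half:])) > 0.5]
print(f"correlations that survive the second subset: {len(survivors)}")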

When Text II mentions “grumblers” in “to face the grumblers”, it refers to:

READ TEXT I AND ANSWER QUESTIONS 16 TO 20


According to the researchers from the University of California, Berkeley:

READ TEXT II AND ANSWER QUESTIONS 21 TO 25:


The base form, past tense and past participle of the verb “fall” in “The criticisms fall into three areas” are, respectively:

READ TEXT II AND ANSWER QUESTIONS 21 TO 25:


The phrase “lots of data to chew on” in Text II makes use of figurative language and shares some common characteristics with:

READ TEXT I AND ANSWER QUESTIONS 16 TO 20


Based on the summary provided for Text I, mark the statements below as TRUE (T) or FALSE (F).

( ) Contextual clues are still not accounted for by computers.
( ) Computers are unreliable because they focus on language patterns.
( ) A game has been invented based on the words people use.

The statements are, respectively:

READ TEXT II AND ANSWER QUESTIONS 21 TO 25:


The three main arguments against big data raised by Text II in the second paragraph are:

READ TEXT I AND ANSWER QUESTIONS 16 TO 20


The title of Text I reveals that the author of this text is:

READ TEXT II AND ANSWER QUESTIONS 21 TO 25:


The use of the phrase “the backlash” in the title of Text II means the:
