Dog Brothers Public Forum

Politics, Religion, Science, Culture and Humanities => Science, Culture, & Humanities => Topic started by: Crafty_Dog on July 26, 2009, 04:43:46 AM

Title: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on July 26, 2009, 04:43:46 AM
Scientists worry machines may outsmart man
By JOHN MARKOFF
Published: July 25, 2009
A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

The conference was organized by the Association for the Advancement of Artificial Intelligence, and in choosing Asilomar for the discussions, the group purposefully evoked a landmark event in the history of science. In 1975, the world’s leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material among organisms. Concerned about possible biohazards and ethical questions, scientists had halted certain experiments. The conference led to guidelines for recombinant DNA research, enabling experimentation to continue.

The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association.

Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.

The idea of an “intelligence explosion” in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity.

This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.

“Something new has taken place in the past five to eight years,” Dr. Horvitz said. “Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.”

The Kurzweil version of technological utopia has captured imaginations in Silicon Valley. This summer an organization called the Singularity University began offering courses to prepare a “cadre” to shape the advances and help society cope with the ramifications.

“My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines,” Dr. Horvitz said.

The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition become unshakable.

“If you wait too long and the sides become entrenched like with G.M.O.,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”

Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives.”

Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, “Oh no, sorry to hear that.”

A physician told him afterward that it was wonderful that the system responded to human emotion. “That’s a great idea,” Dr. Horvitz said he was told. “I have no time for that.”

Title: Creativity as the Necessary Ingredient
Post by: Body-by-Guinness on August 20, 2009, 09:39:36 AM
What Makes A Genius?

By Andrea Kuszewski
August 20, 2009
What is the difference between "intelligence" and "genius"?  Creativity, of course!

There was an article recently in Scientific American that discussed creativity and the signs in children that were precursors to creative achievement in adulthood. The authors cite some work done by Michigan State University researchers Robert and Michele Root-Bernstein, a collaboration between a physiologist and a theater instructor, who presented their findings at an annual meeting of the APA this past March. Since I research creativity as well as intelligence, I found the points brought up in the article quite intriguing, yet not surprising.

One of the best observations stated in the article regarding achievement was this:
"... most highly creative people are polymaths- they enjoy and excel at a range of challenging activities. For instance, in a survey of scientists at all levels of achievement, the [researchers] found that only about one sixth report engaging in a secondary activity of an artistic or creative nature, such as painting or writing non-scientific prose. In contrast, nearly all Nobel Prize winners in science have at least one other creative activity that they pursue seriously. Creative breadth, the [researchers] argue, is an important but understudied component of genius."

Everyone is fascinated by famous geniuses like Albert Einstein. They speculate as to what made him so unique and brilliant, but no one has been able to identify exactly what "it" is. If you mention "intelligence research", the average person assumes you are speaking of that top 1 or 2%, the IQs over 145, the little kids you see on TV passing out during Spelling Bees, because they are freaking out from the pressure of having to spell antidisestablishmentarianism on a stage before hundreds of on-lookers.

But the reality is that most intelligence researchers don't focus on the top 1 or 2%; they look at the general population, whose average score is 100, and generally focus their attention on the lower to middle portion of the distribution.

There may be a multitude of reasons why most researchers focus their study on the lower end of the distribution; one I can see is that the correlations between individual abilities measured on IQ tests and the actual overall ability level of the person taking the test are strongest in that portion of the distribution: IQ scores of 110 and below.

I have made that point before (which you will recognize if you read any of my pieces on intelligence), so nothing new there. However, what I found especially promising about the work done by the Root-Bernsteins is that instead of merely trying to analyze IQ scores, they actually looked at the attributes of successful, intelligent, creative people and figured out what it was they had going for them that other highly intelligent people did not: essentially, what the difference was between "intelligent" and "genius".

(the paper abstracts from the symposium describing their methods can be read here)

Now, some hard-core statistician-types may balk at their methods, screaming, "Case studies are not valid measures of intelligence!" and to a certain degree, they have a point. Yes, they initially looked at case studies of successful individuals, but then they surveyed scientists across multiple fields and found that the highest achievers in their domain (as indicated by earning the Nobel Prize) were skilled in multiple domains, at least one of these considered to be "creative", such as music, art, or non-scientific writing.

We would probably consider most scientists to be intelligent. But are they all geniuses? Do geniuses have the highest IQ scores? Richard Feynman is undeniably considered to be a genius. While his IQ score was *only* around 120-125, he was also an artist and a gifted communicator. Was he less intelligent than someone with an IQ score of 150?

What we are doing here is challenging the very definition of "intelligence". What is it really? An IQ score? Computational ability? Being able to talk your way out of a speeding ticket? Knowing how to handle a crisis effectively? Arguing a convincing case before a jury? Well, maybe all of the above.

Many moons ago, Dr Robert Sternberg, now the Dean of Arts and Sciences at Tufts University in Boston, brought this very argument to the psychology community. And, to be honest, it was not exactly welcomed with open arms. He believed that intelligence comprises three facets, only one of which is measured on typical IQ tests such as the SAT and the GRE: analytical ability. The second component is creativity, and the third is practical ability, or being able to use your analytical skills and your creativity to effectively solve novel problems. He called this the Triarchic Theory of Intelligence.

Fast-forwarding to the present, Dr Rex Jung, from the Mind Institute and the University of New Mexico in Albuquerque, published a paper earlier this year showing biochemical support for the Threshold Theory of Creativity (a necessary but not sufficient level of intelligence is needed for successful creative achievement). In a nutshell, he found that intelligence (as most people measure it today) is not enough to set a person apart and raise them to the level of genius. Creativity is the essential component that not all intelligent people possess but that geniuses require. Not all creative people are geniuses (thus the Threshold Theory), but in order to reach genius status, creativity is a necessary attribute.

Someone could have an IQ of 170, yet get lost inside of a paper bag, and not have the ability to hold a conversation with anyone other than a dog. That is not my definition of genius. We want to know what made geniuses like Einstein and Feynman so far ahead of their intelligent scientist peers, and the answer to that is creativity.

I am hoping that as more studies come out demonstrating the importance of multi-disciplinary thinking and collaboration across domains for reaching the highest levels of achievement, the science community will eventually fully embrace creativity research and see its validity in the study of successful intelligence. As a society, we already recognize the importance of creativity in innovation and in the arts, so let's take it a step further.

Give creativity the "street cred" it deserves as the defining feature that separates mere intelligence from utter genius.

Source URL: http://www.scientificblogging.com/rogue_neuron/what_makes_genius
Title: Bird Intelligence
Post by: Crafty_Dog on December 07, 2010, 04:17:59 PM
http://www.youtube.com/watch?v=efcIsve5wu8&feature=related
Title: Re: Intelligence of crows
Post by: Freki on December 08, 2010, 07:54:31 AM
Crows are amazing

[youtube]http://www.youtube.com/watch?v=NhmZBMuZ6vE[/youtube]
Title: Re: Intelligence
Post by: Crafty_Dog on December 08, 2010, 03:58:06 PM
I liked that Freki.
Title: Re: Intelligence
Post by: G M on December 08, 2010, 04:24:51 PM
I did too. It reminded me of backpacking in an isolated part of the southwest and having curious ravens surveilling me. They'd circle and study. They'd land behind trees and then stealthily hop on the ground to get a closer look. There was a definite sense of some sentient thought from them, and I'm not one for sentimental anthropomorphism.
Title: Re: Intelligence
Post by: Crafty_Dog on December 08, 2010, 07:56:33 PM
Konrad Lorenz wrote quite often of "jackdaws".  This was translated from German.  Does anyone know if this is another word for crows?  or?
Title: Re: Intelligence
Post by: G M on December 08, 2010, 08:15:41 PM
Definition of JACKDAW
1 : a common black and gray bird (Corvus monedula) of Eurasia and northern Africa that is related to but smaller than the carrion crow
2 : grackle 1
Title: Test your power of observation
Post by: Crafty_Dog on January 17, 2011, 09:49:13 AM
http://www.oldjoeblack.0nyx.com/thinktst.htm
Title: Re: Intelligence
Post by: Vicbowling on January 18, 2011, 04:34:32 PM
Very interesting article, but I found myself less disturbed by the Terminator-esque prediction of the future and a little concerned with the fact that the doctor was completely fine with the A.I. projecting "human" emotion so he WOULDN'T have to... anyone else find a flaw in that?!


Title: Private Intel firm
Post by: bigdog on January 24, 2011, 02:47:54 AM
http://www.stltoday.com/news/national/article_59308dcd-3092-5280-92fb-898f569504e4.html

Ousted CIA agent runs his own private operation
With U.S. funding cut, he relies on donations to fund his 'operatives' in Pakistan and Afghanistan.

Title: Re: Intelligence
Post by: Crafty_Dog on January 24, 2011, 04:49:25 AM
BD:

That is a different kind of intelligence  :lol:  May I ask you to please post that on the "Intel Matters" thread on the P&R forum?

Thank you,
Title: Re: Intelligence
Post by: bigdog on January 24, 2011, 07:46:58 AM
Whether it "matters" or not, it appears mine was lacking!  Sorry about that, Guro!
Title: Re: Intelligence
Post by: Crafty_Dog on January 25, 2011, 05:42:59 AM
No worries BD; in this context the term "intelligence" was ambiguous. :-)
Title: Et tu, Watson; Kurzweil's singularity
Post by: Crafty_Dog on February 12, 2011, 05:15:48 AM
Computer beats best humans at Jeopardy

http://wattsupwiththat.com/2011/02/10/worth-watching-watson/
===========
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.

 

On the show (you can find the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

 

Kurzweil then demonstrated the computer, which he built himself—a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.

 

But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

 

That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

 

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.

 

True? True.

 

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

 

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.

 

Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

 

The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.

 

People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.

 

The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":

 

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

 

The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."

 

By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1—and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.

 

But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called The Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."

 

In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?

 

Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting—you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.

 

As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curves backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.

 

Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond—the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.

 

Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."

 

Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity—never say he's not conservative—at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today. 
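
To make the arithmetic behind those dates concrete, here is a minimal Python sketch of the kind of extrapolation described above. It assumes only the doubling-every-two-years rule cited for Moore's law; the baseline year, the starting MIPS figure, and the function name are illustrative placeholders, not Kurzweil's actual data.

[code]
# Minimal sketch of exponential extrapolation under a doubling-every-two-years
# assumption. The baseline figure below is a placeholder for illustration,
# not a measured data point.

def mips_per_1000_dollars(year, base_year=2011, base_mips=10_000.0,
                          doubling_period_years=2.0):
    """Hypothetical computing power (MIPS) purchasable for $1,000 in `year`."""
    return base_mips * 2 ** ((year - base_year) / doubling_period_years)

if __name__ == "__main__":
    baseline = mips_per_1000_dollars(2011)
    for year in (2011, 2020, 2030, 2045):
        value = mips_per_1000_dollars(year)
        print(f"{year}: ~{value:,.0f} MIPS per $1,000 "
              f"(about {value / baseline:,.0f}x the 2011 baseline)")
[/code]

Thirty-four years of two-year doublings is 2^17, roughly a 131,000-fold increase, which is why curves extrapolated this way look so implausible to linear intuition.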

Title: Kurzweil 2
Post by: Crafty_Dog on February 12, 2011, 05:25:45 AM


The Singularity isn't just an idea; it attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

 

Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

 

In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology. 

 

At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading—the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters—handed out pamphlets. An android chatted with visitors in one corner.

 

After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.

 

For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.

 

Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable—rather like the heat death of the universe—is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."

 

Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.

 

But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.

 

It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity—that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have religion."

 

Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.

 

The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the kind of intelligence we associate with humans or even with talking computers in movies—HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't exist yet.

 

Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. "Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude.

 

Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being—in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness—a machine with no ghost in it? And how would we know?

 

Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?

 

Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.

 

If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies underground, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools."

 

Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.

 

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."

 

This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the École Polytechnique Fédérale de Lausanne in Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene super-computer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take?)

 

By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I've tried to push myself to really look."

 

In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.

 

We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.

 

Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.

 

But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?

 

Already 30,000 patients with Parkinson's disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.

 

A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers — except unlike the Founding Fathers, they'll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future.

 

But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.
Title: Memory training
Post by: Crafty_Dog on February 20, 2011, 11:18:18 AM

http://www.nytimes.com/interactive/2011/02/20/magazine/mind-secrets.html?nl=todaysheadlines&emc=tha210
Title: WSJ: Watson
Post by: Crafty_Dog on March 14, 2011, 10:58:22 AM


By STEPHEN BAKER
In the weeks since IBM's computer, Watson, thrashed two flesh-and-blood champions in the quiz show "Jeopardy!," human intelligence has been punching back—at least on blogs and opinion pages. Watson doesn't "know" anything, experts say. It doesn't laugh at jokes, cannot carry on a conversation, has no sense of self, and commits bloopers no human would consider. (Toronto, a U.S. city?) What's more, it's horribly inefficient, requiring a roomful of computers to match what we carry between our ears. And it probably would not have won without its inhuman speed on the buzzer.

This is all enough to make you feel reinvigorated to be human. But focusing on Watson's shortcomings misses the point. It risks distracting people from the transformation that Watson all but announced on its "Jeopardy!" debut: These question-answering machines will soon be working alongside us in offices and laboratories, and forcing us to make adjustments in what we learn and how we think. Watson is an early sighting of a highly disruptive force.

The key is to regard these computers not as human wannabes but rather as powerful tools, ones that can handle jobs currently held by people. The "intelligence" of the tools matters little. What counts is the information they deliver.

In our history of making tools, we have long adjusted to the disruptions they cause. Imagine an Italian town in the 17th century. Perhaps there's one man who has a special sense for the weather. Let's call him Luigi. Using his magnificent brain, he picks up on signals—changes in the wind, certain odors, perhaps the flight paths of birds or noises coming from the barn. And he spreads word through the town that rain will be coming in two days, or that a cold front might freeze the crops. Luigi is a valuable member of society.

Along comes a traveling vendor who carries a new instrument invented in 1643 by Evangelista Torricelli. It's a barometer, and it predicts the weather about as well as Luigi. It's certainly not as smart as him, if it can be called smart at all. It has no sense of self, is deaf to the animals in the barn, blind to the flight patterns of birds. Yet it comes up with valuable information.

In a world with barometers, Luigi and similar weather savants must find other work for their fabulous minds. Perhaps using the new tool, they can deepen their analysis of weather patterns, keep careful records and then draw conclusions about optimal farming techniques. They might become consultants. Maybe some of them drop out of the weather business altogether. The new tool creates both displacement and economic opportunity. It forces people to reconsider how they use their heads.

The same is true of Watson and the coming generation of question-answering machines. We can carry on interesting discussions about how "smart" they are or aren't, but that's academic. They make sense of complex questions in English and fetch answers, scoring each one for the machines' level of confidence in it. When asked if Watson can "think," David Ferrucci, IBM's chief scientist on the "Jeopardy!" team, responds: "Can a submarine swim?"

As these computers make their way into law offices, pharmaceutical labs and hospitals, people who currently make a living by answering questions must adjust. They'll have to add value in ways that machines cannot. This raises questions not just for individuals but for entire societies. How do we educate students for a labor market in which machines answer a growing percentage of the questions? How do we create curricula for uniquely human skills, such as generating original ideas, cracking jokes, carrying on meaningful dialogue? How can such lessons be scored and standardized?

These are the challenges before us. They're similar, in a sense, to what we've been facing with globalization. Again we will find ourselves grappling with a new colleague and competitor. This time around, it's a machine. We should scrutinize that tool, focusing on the questions it fails to answer. Its struggles represent a road map for our own cognitive migration. We must go where computers like Watson cannot.

Mr. Baker is the author of "Final Jeopardy—Man vs. Machine and the Quest to Know Everything" (Houghton Mifflin Harcourt, 2011).

Title: Are people getting dumber?
Post by: Crafty_Dog on February 27, 2012, 07:21:59 PM


http://www.nytimes.com/roomfordebate/2012/02/26/are-people-getting-dumber/?nl=todaysheadlines&emc=thab1
Title: Re: Are people getting dumber?
Post by: G M on February 27, 2012, 07:29:47 PM


http://www.nytimes.com/roomfordebate/2012/02/26/are-people-getting-dumber/?nl=todaysheadlines&emc=thab1

Look at who we have as president. Look at those who think if we'd just tax the rich more, all the economic badness would go away. Stupid is growing, and California is ground zero for its spread.
Title: Electrical Brain Stimulation
Post by: Crafty_Dog on May 15, 2012, 10:31:05 AM




http://theweek.com/article/index/226196/how-electrical-brain-stimulation-can-change-the-way-we-think
How electrical brain stimulation can change the way we think
After my brain was jolted, says Sally Adee, I had a near-spiritual experience
PUBLISHED MARCH 30, 2012, AT 10:01 AM

Researchers have found that "transcranial direct current stimulation" can more than double the rate at which people learn a wide range of tasks, such as object recognition, math skills, and marksmanship.
HAVE YOU EVER wanted to take a vacation from your own head? You could do it easily enough with liberal applications of alcohol or hallucinogens, but that's not the kind of vacation I'm talking about. What if you could take a very specific vacation only from the stuff that makes it painful to be you: the sneering inner monologue that insists you're not capable enough or smart enough or pretty enough, or whatever hideous narrative rides you. Now that would be a vacation. You'd still be you, but you'd be able to navigate the world without the emotional baggage that now drags on your every decision. Can you imagine what that would feel like?

Late last year, I got the chance to find out, in the course of investigating a story for New Scientist about how researchers are using neurofeedback and electrical brain stimulation to accelerate learning. What I found was that electricity might be the most powerful drug I've ever used in my life.

It used to be just plain old chemistry that had neuroscientists gnawing their fingernails about the ethics of brain enhancement. As Adderall, Ritalin, and other cognitive enhancing drugs gain widespread acceptance as tools to improve your everyday focus, even the stigma of obtaining them through less-than-legal channels appears to be disappearing. People will overlook a lot of moral gray areas in the quest to juice their brain power.

But until recently, you were out of luck if you wanted to do that without taking drugs that might be addictive, habit-forming, or associated with unfortunate behavioral side effects. Over the past few years, however, it's become increasingly clear that applying an electrical current to your head confers similar benefits.

U.S. military researchers have had great success using "transcranial direct current stimulation" (tDCS) — in which they hook you up to what's essentially a 9-volt battery and let the current flow through your brain. After a few years of lab testing, they've found that tDCS can more than double the rate at which people learn a wide range of tasks, such as object recognition, math skills, and marksmanship.

We don't yet have a commercially available "thinking cap," but we will soon. So the research community has begun to ask: What are the ethics of battery-operated cognitive enhancement? Recently, a group of Oxford neuroscientists released a cautionary statement about the ethics of brain boosting; then the U.K.'s Royal Society released a report that questioned the use of tDCS for military applications. Is brain boosting a fair addition to the cognitive enhancement arms race? Will it create a Morlock/Eloi–like social divide, where the rich can afford to be smarter and everyone else will be left behind? Will Tiger Moms force their lazy kids to strap on a zappity helmet during piano practice?

After trying it myself, I have different questions. To make you understand, I am going to tell you how it felt. The experience wasn't simply about the easy pleasure of undeserved expertise. For me, it was a near-spiritual experience. When a nice neuroscientist named Michael Weisend put the electrodes on me, what defined the experience was not feeling smarter or learning faster: The thing that made the earth drop out from under my feet was that for the first time in my life, everything in my head finally shut up.

The experiment I underwent was accelerated marksmanship training, using a training simulation that the military uses. I spent a few hours learning how to shoot a modified M4 close-range assault rifle, first without tDCS and then with. Without it I was terrible, and when you're terrible at something, all you can do is obsess about how terrible you are. And how much you want to stop doing the thing you are terrible at.

Then this happened:

THE 20 MINUTES I spent hitting targets while electricity coursed through my brain were far from transcendent. I only remember feeling like I'd just had an excellent cup of coffee, but without the caffeine jitters. I felt clear-headed and like myself, just sharper. Calmer. Without fear and without doubt. From there on, I just spent the time waiting for a problem to appear so that I could solve it.

It was only when they turned off the current that I grasped what had just happened. Relieved of the minefield of self-doubt that constitutes my basic personality, I was a hell of a shot. And I can't tell you how stunning it was to suddenly understand just how much of a drag that inner cacophony is on my ability to navigate life and basic tasks.

It's possibly the world's biggest cliché that we're our own worst enemies. In yoga, they tell you that you need to learn to get out of your own way. Practices like yoga are meant to help you exhume the person you are without all the geologic layers of narrative and cross talk that are constantly chattering in your brain. I think eventually they just become background noise. We stop hearing them consciously, but believe me, we listen to them just the same.

My brain without self-doubt was a revelation. There was suddenly this incredible silence in my head; I've experienced something close to it during two-hour Iyengar yoga classes, or at the end of a 10k, but the fragile peace in my head would be shattered almost the second I set foot outside the calm of the studio. I had certainly never experienced instant Zen in the frustrating middle of something I was terrible at.

WHAT HAD HAPPENED inside my skull? One theory is that the mild electrical shock may depolarize the neuronal membranes in the part of the brain associated with object recognition, making the cells more excitable and responsive to inputs. Like many other neuroscientists working with tDCS, Weisend thinks this accelerates the formation of new neural pathways during the time that someone practices a skill, making it easier to get into the "zone." The method he was using on me boosted the speed with which wannabe snipers could detect a threat by a factor of 2.3.

Another possibility is that the electrodes somehow reduce activity in the prefrontal cortex — the area of the brain used in critical thought, says psychologist Mihaly Csikszentmihalyi of Claremont Graduate University in California. And critical thought, some neuroscientists believe, is muted during periods of intense Zen-like concentration. It sounds counterintuitive, but silencing self-critical thoughts might allow more automatic processes to take hold, which would in turn produce that effortless feeling of flow.

With the electrodes on, my constant self-criticism virtually disappeared, I hit every one of the targets, and there were no unpleasant side effects afterwards. The bewitching silence of the tDCS lasted, gradually diminishing over a period of about three days. The inevitable return of self-doubt and inattention was disheartening, to say the least.

I HOPE YOU can sympathize with me when I tell you that the thing I wanted most acutely for the weeks following my experience was to go back and strap on those electrodes. I also started to have a lot of questions. Who was I apart from the angry bitter gnomes that populate my mind and drive me to failure because I'm too scared to try? And where did those voices come from? Some of them are personal history, like the caustically dismissive 7th grade science teacher who advised me to become a waitress. Some of them are societal, like the hateful lady-mag voices that bully me every time I look in a mirror. An invisible narrative informs all my waking decisions in ways I can't even keep track of.

What would a world look like in which we all wore little tDCS headbands that would keep us in a primed, confident state, free of all doubts and fears? I'd wear one at all times and have two in my backpack ready in case something happened to the first one.

I think the ethical questions we should be asking about tDCS are much more subtle than the ones we've been asking about cognitive enhancement. Because how you define "cognitive enhancement" frames the debate about its ethics.

If you told me tDCS would allow someone to study twice as fast for the bar exam, I might be a little leery because now I have visions of rich daddies paying for Junior's thinking cap. Neuroscientists like Roy Hamilton have termed this kind of application "cosmetic neuroscience," which implies a kind of "First World problem" — frivolity.

But now think of a different application — could school-age girls use the zappy cap while studying math to drown out the voices that tell them they can't do math because they're girls? How many studies have found a link between invasive stereotypes and poor test performance?

And then, finally, the main question: What role do doubt and fear play in our lives if their eradication actually causes so many improvements? Do we make more ethical decisions when we listen to our inner voices of self-doubt or when we're freed from them? If we all wore these caps, would the world be a better place?

And if tDCS headwear were to become widespread, would the same 20 minutes with a 2 milliamp current always deliver the same effects, or would you need to up your dose like you do with some other drugs?

Because, to steal a great point from an online commenter, pretty soon, a 9-volt battery may no longer be enough.


©2012 by Sally Adee, reprinted by permission of New Scientist. The full article can be found at NewScientist.com.
Title: The Hazards of Confidence
Post by: Crafty_Dog on May 26, 2012, 12:41:41 PM
October 19, 2011

Don’t Blink! The Hazards of Confidence
By DANIEL KAHNEMAN

Many decades ago I spent what seemed like a great deal of time under a scorching sun, watching groups of sweaty soldiers as they solved a problem. I was doing my national service in the Israeli Army at the time. I had completed an undergraduate degree in psychology, and after a year as an infantry officer, I was assigned to the army’s Psychology Branch, where one of my occasional duties was to help evaluate candidates for officer training. We used methods that were developed by the British Army in World War II.

One test, called the leaderless group challenge, was conducted on an obstacle field. Eight candidates, strangers to one another, with all insignia of rank removed and only numbered tags to identify them, were instructed to lift a long log from the ground and haul it to a wall about six feet high. There, they were told that the entire group had to get to the other side of the wall without the log touching either the ground or the wall, and without anyone touching the wall. If any of these things happened, they were to acknowledge it and start again.

A common solution was for several men to reach the other side by crawling along the log as the other men held it up at an angle, like a giant fishing rod. Then one man would climb onto another’s shoulder and tip the log to the far side. The last two men would then have to jump up at the log, now suspended from the other side by those who had made it over, shinny their way along its length and then leap down safely once they crossed the wall. Failure was common at this point, which required starting over.

As a colleague and I monitored the exercise, we made note of who took charge, who tried to lead but was rebuffed, how much each soldier contributed to the group effort. We saw who seemed to be stubborn, submissive, arrogant, patient, hot-tempered, persistent or a quitter. We sometimes saw competitive spite when someone whose idea had been rejected by the group no longer worked very hard. And we saw reactions to crisis: who berated a comrade whose mistake caused the whole group to fail, who stepped forward to lead when the exhausted team had to start over. Under the stress of the event, we felt, each man’s true nature revealed itself in sharp relief.

After watching the candidates go through several such tests, we had to summarize our impressions of the soldiers’ leadership abilities with a grade and determine who would be eligible for officer training. We spent some time discussing each case and reviewing our impressions. The task was not difficult, because we had already seen each of these soldiers’ leadership skills. Some of the men looked like strong leaders, others seemed like wimps or arrogant fools, others mediocre but not hopeless. Quite a few appeared to be so weak that we ruled them out as officer candidates. When our multiple observations of each candidate converged on a coherent picture, we were completely confident in our evaluations and believed that what we saw pointed directly to the future. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment. The obvious best guess about how he would do in training, or in combat, was that he would be as effective as he had been at the wall. Any other prediction seemed inconsistent with what we saw.

Because our impressions of how well each soldier performed were generally coherent and clear, our formal predictions were just as definite. We rarely experienced doubt or conflicting impressions. We were quite willing to declare: “This one will never make it,” “That fellow is rather mediocre, but should do O.K.” or “He will be a star.” We felt no need to question our forecasts, moderate them or equivocate. If challenged, however, we were fully prepared to admit, “But of course anything could happen.”

We were willing to make that admission because, as it turned out, despite our certainty about the potential of individual candidates, our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we could compare our evaluations of future cadets with the judgments of their commanders at the officer-training school. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were better than blind guesses, but not by much.

We were downcast for a while after receiving the discouraging news. But this was the army. Useful or not, there was a routine to be followed, and there were orders to be obeyed. Another batch of candidates would arrive the next day. We took them to the obstacle field, we faced them with the wall, they lifted the log and within a few minutes we saw their true natures revealed, as clearly as ever. The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated new candidates and very little effect on the confidence we had in our judgments and predictions.

I thought that what was happening to us was remarkable. The statistical evidence of our failure should have shaken our confidence in our judgments of particular candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each particular prediction was valid. I was reminded of visual illusions, which remain compelling even when you know that what you see is false. I was so struck by the analogy that I coined a term for our experience: the illusion of validity.

I had discovered my first cognitive fallacy.

Decades later, I can see many of the central themes of my thinking about judgment in that old experience. One of these themes is that people who face a difficult question often answer an easier one instead, without realizing it. We were required to predict a soldier’s performance in officer training and in combat, but we did so by evaluating his behavior over one hour in an artificial situation. This was a perfect instance of a general rule that I call WYSIATI, “What you see is all there is.” We had made up a story from the little we knew but had no way to allow for what we did not know about the individual’s future, which was almost everything that would actually matter. When you know as little as we did, you should not make extreme predictions like “He will be a star.” The stars we saw on the obstacle field were most likely accidental flickers, in which a coincidence of random events — like who was near the wall — largely determined who became a leader. Other events — some of them also random — would determine later success in training and combat.

You may be surprised by our failure: it is natural to expect the same leadership ability to manifest itself in various situations. But the exaggerated expectation of consistency is a common error. We are prone to think that the world is more regular and predictable than it really is, because our memory automatically and continuously maintains a story about what is going on, and because the rules of memory tend to make that story as coherent as possible and to suppress alternatives. Fast thinking is not prone to doubt.

The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence. An individual who expresses high confidence probably has a good story, which may or may not be true.

I coined the term “illusion of validity” because the confidence we had in judgments about individual soldiers was not affected by a statistical fact we knew to be true — that our predictions were unrelated to the truth. This is not an isolated observation. When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails. And this goes for you, too. The confidence you will experience in your future judgments will not be diminished by what you just read, even if you believe every word.

I first visited a Wall Street firm in 1984. I was there with my longtime collaborator Amos Tversky, who died in 1996, and our friend Richard Thaler, now a guru of behavioral economics. Our host, a senior investment manager, had invited us to discuss the role of judgment biases in investing. I knew so little about finance at the time that I had no idea what to ask him, but I remember one exchange. “When you sell a stock,” I asked him, “who buys it?” He answered with a wave in the vague direction of the window, indicating that he expected the buyer to be someone else very much like him. That was odd: because most buyers and sellers know that they have the same information as one another, what made one person buy and the other sell? Buyers think the price is too low and likely to rise; sellers think the price is high and likely to drop. The puzzle is why buyers and sellers alike think that the current price is wrong.

Most people in the investment business have read Burton Malkiel’s wonderful book “A Random Walk Down Wall Street.” Malkiel’s central idea is that a stock’s price incorporates all the available knowledge about the value of the company and the best predictions about the future of the stock. If some people believe that the price of a stock will be higher tomorrow, they will buy more of it today. This, in turn, will cause its price to rise. If all assets in a market are correctly priced, no one can expect either to gain or to lose by trading.

We now know, however, that the theory is not quite right. Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match. The first demonstration of this startling conclusion was put forward by Terry Odean, a former student of mine who is now a finance professor at the University of California, Berkeley.

Odean analyzed the trading records of 10,000 brokerage accounts of individual investors over a seven-year period, allowing him to identify all instances in which an investor sold one stock and soon afterward bought another stock. By these actions the investor revealed that he (most of the investors were men) had a definite idea about the future of two stocks: he expected the stock that he bought to do better than the one he sold.

To determine whether those appraisals were well founded, Odean compared the returns of the two stocks over the following year. The results were unequivocally bad. On average, the shares investors sold did better than those they bought, by a very substantial margin: 3.3 percentage points per year, in addition to the significant costs of executing the trades. Some individuals did much better, others did much worse, but the large majority of individual investors would have done better by taking a nap rather than acting on their ideas. In a paper titled “Trading Is Hazardous to Your Wealth,” Odean and his colleague Brad Barber showed that, on average, the most active traders had the poorest results, while those who traded the least earned the highest returns. In another paper, “Boys Will Be Boys,” they reported that men act on their useless ideas significantly more often than women do, and that as a result women achieve better investment results than men.
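
For readers who want to see the mechanics, here is a minimal sketch of the comparison Odean describes, written in Python with made-up toy data rather than his actual brokerage records; the column names and the 30-day "soon afterward" window are my assumptions, not details from the paper.

import pandas as pd

# Hypothetical trades: each row is one sale or purchase, together with the
# (made-up) return of that stock over the year following the trade date.
trades = pd.DataFrame({
    "account": [1, 1, 2, 2, 3, 3],
    "date": pd.to_datetime(["2020-01-05", "2020-01-20",
                            "2020-03-01", "2020-03-10",
                            "2020-06-01", "2020-06-25"]),
    "side": ["sell", "buy", "sell", "buy", "sell", "buy"],
    "ticker": ["AAA", "BBB", "CCC", "DDD", "EEE", "FFF"],
    "fwd_1y_return": [0.12, 0.08, 0.05, 0.02, -0.01, -0.06],
})

# For every case where an investor sold a stock and bought another within
# 30 days, record how much better (or worse) the sold stock did over the
# following year.
gaps = []
for _, grp in trades.sort_values("date").groupby("account"):
    sells = grp[grp["side"] == "sell"]
    buys = grp[grp["side"] == "buy"]
    for _, sell in sells.iterrows():
        soon_after = buys[(buys["date"] > sell["date"]) &
                          (buys["date"] <= sell["date"] + pd.Timedelta(days=30))]
        for _, buy in soon_after.iterrows():
            gaps.append(sell["fwd_1y_return"] - buy["fwd_1y_return"])

print(f"average (sold minus bought) one-year return: {pd.Series(gaps).mean():.1%}")
# A positive average is Odean's finding: the shares investors sold went on
# to outperform the shares they bought.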

Of course, there is always someone on the other side of a transaction; in general, it’s a financial institution or professional investor, ready to take advantage of the mistakes that individual traders make. Further research by Barber and Odean has shed light on these mistakes. Individual investors like to lock in their gains; they sell “winners,” stocks whose prices have gone up, and they hang on to their losers. Unfortunately for them, in the short run recent winners tend to do better than recent losers, so individuals sell the wrong stocks. They also buy the wrong stocks. Individual investors predictably flock to stocks in companies that are in the news. Professional investors are more selective in responding to news. These findings provide some justification for the label of “smart money” that finance professionals apply to themselves.

Although professionals are able to extract a considerable amount of wealth from amateurs, few stock pickers, if any, have the skill needed to beat the market consistently, year after year. The diagnostic for the existence of any skill is the consistency of individual differences in achievement. The logic is simple: if individual differences in any one year are due entirely to luck, the ranking of investors and funds will vary erratically and the year-to-year correlation will be zero. Where there is skill, however, the rankings will be more stable. The persistence of individual differences is the measure by which we confirm the existence of skill among golfers, orthodontists or speedy toll collectors on the turnpike.

Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year.

More important, the year-to-year correlation among the outcomes of mutual funds is very small, barely different from zero. The funds that were successful in any given year were mostly lucky; they had a good roll of the dice. There is general agreement among researchers that this is true for nearly all stock pickers, whether they know it or not — and most do not. The subjective experience of traders is that they are making sensible, educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are not more accurate than blind guesses.

Some years after my introduction to the world of finance, I had an unusual opportunity to examine the illusion of skill up close. I was invited to speak to a group of investment advisers in a firm that provided financial advice and other services to very wealthy clients. I asked for some data to prepare my presentation and was granted a small treasure: a spreadsheet summarizing the investment outcomes of some 25 anonymous wealth advisers, for eight consecutive years. The advisers’ scores for each year were the main determinant of their year-end bonuses. It was a simple matter to rank the advisers by their performance and to answer a question: Did the same advisers consistently achieve better returns for their clients year after year? Did some advisers consistently display more skill than others?

To find the answer, I computed the correlations between the rankings of advisers in different years, comparing Year 1 with Year 2, Year 1 with Year 3 and so on up through Year 7 with Year 8. That yielded 28 correlations, one for each pair of years. While I was prepared to find little year-to-year consistency, I was still surprised to find that the average of the 28 correlations was .01. In other words, zero. The stability that would indicate differences in skill was not to be found. The results resembled what you would expect from a dice-rolling contest, not a game of skill.
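
To make the exercise concrete, here is a short Python sketch of the same calculation run on simulated data (not the firm's actual spreadsheet): 25 "advisers" whose yearly outcomes are pure noise, ranked in each of 8 years, with all 28 pairwise year-to-year rank correlations averaged. The point it illustrates is that luck alone produces an average correlation close to zero, which is what the real data showed.

import itertools
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_advisers, n_years = 25, 8

# Simulated outcomes: every adviser's score in every year is pure luck.
outcomes = rng.normal(size=(n_advisers, n_years))

# All 28 pairs of years (8 choose 2), correlating the advisers' rankings.
correlations = []
for year_i, year_j in itertools.combinations(range(n_years), 2):
    rho, _ = spearmanr(outcomes[:, year_i], outcomes[:, year_j])
    correlations.append(rho)

print("number of year pairs:", len(correlations))                 # 28
print(f"average rank correlation: {np.mean(correlations):.3f}")   # close to zero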

No one in the firm seemed to be aware of the nature of the game that its stock pickers were playing. The advisers themselves felt they were competent professionals performing a task that was difficult but not impossible, and their superiors agreed. On the evening before the seminar, Richard Thaler and I had dinner with some of the top executives of the firm, the people who decide on the size of bonuses. We asked them to guess the year-to-year correlation in the rankings of individual advisers. They thought they knew what was coming and smiled as they said, “not very high” or “performance certainly fluctuates.” It quickly became clear, however, that no one expected the average correlation to be zero.

What we told the directors of the firm was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill. This should have been shocking news to them, but it was not. There was no sign that they disbelieved us. How could they? After all, we had analyzed their own results, and they were certainly sophisticated enough to appreciate their implications, which we politely refrained from spelling out. We all went on calmly with our dinner, and I am quite sure that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions — and thereby threaten people’s livelihood and self-esteem — are simply not absorbed. The mind does not digest them. This is particularly true of statistical studies of performance, which provide general facts that people will ignore if they conflict with their personal experience.

The next morning, we reported the findings to the advisers, and their response was equally bland. Their personal experience of exercising careful professional judgment on complex problems was far more compelling to them than an obscure statistical result. When we were done, one executive I dined with the previous evening drove me to the airport. He told me, with a trace of defensiveness, “I have done very well for the firm, and no one can take that away from me.” I smiled and said nothing. But I thought, privately: Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?

We often interact with professionals who exercise their judgment with evident confidence, sometimes priding themselves on the power of their intuition. In a world rife with illusions of validity and skill, can we trust them? How do we distinguish the justified confidence of experts from the sincere overconfidence of professionals who do not know they are out of their depth? We can believe an expert who admits uncertainty but cannot take expressions of high confidence at face value. As I first learned on the obstacle field, people come up with coherent stories and confident predictions even when they know little or nothing. Overconfidence arises because people are often blind to their own blindness.

True intuitive expertise is learned from prolonged experience with good feedback on mistakes. You are probably an expert in guessing your spouse’s mood from one word on the telephone; chess players find a strong move in a single glance at a complex position; and true legends of instant diagnoses are common among physicians. To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about. Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.

Daniel Kahneman is emeritus professor of psychology and of public affairs at Princeton University and a winner of the 2002 Nobel Prize in Economics. This article is adapted from his book “Thinking, Fast and Slow,” out this month from Farrar, Straus & Giroux.
http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html?pagewanted=all
Title: Golden Balls variation of Prisoner's Dilemma game theory
Post by: Crafty_Dog on June 30, 2012, 01:00:55 PM


http://www.businessinsider.com/golden-balls-game-theory-2012-4
Title: Gorillas dismantle snares
Post by: Crafty_Dog on August 08, 2012, 09:12:43 PM
http://www.redorbit.com/news/science/1112661209/young-gorillas-observed-dismantling-poacher-snares/
Young Gorillas Observed Dismantling Poacher Snares
July 23, 2012

Juvenile gorillas from the Kuryama group dismantle a snare in Rwanda's Volcanoes National Park. (Photo credit: Dian Fossey Gorilla Fund International)


In what can only be described as an impassioned effort to save their own kind from the hand of poachers, two juvenile mountain gorillas have been observed searching out and dismantling manmade traps and snares in their Rwandan forest home, according to a group studying the majestic creatures.

Conservationists working for the Dian Fossey Gorilla Fund International were stunned when they saw Dukore and Rwema, two brave young mountain gorillas, destroying a trap, similar to ones that snared and killed a member of their family less than a week before. Bush-meat hunters set thousands of traps throughout the forests of Rwanda, hoping to catch antelope and other species, but sometimes they capture apes as well.

In an interview with Mark Prigg at The Daily Mail, Erika Archibald, a spokesperson for the Gorilla Fund, said that John Ndayambaje, a tracker for the group, was conducting his regular rounds when he spotted a snare. As he bent down to dismantle it, a silverback from the group rushed him and made a grunting noise that is considered a warning call. A few moments later the two youngsters Dukore and Rwema rushed up to the snare and began to dismantle it on their own.

Then, seconds after destroying the first trap, Archibald continued, Ndayambaje watched the pair, along with a third juvenile named Tetero, move to a second snare that he had not noticed beforehand and dismantle it as well. He stood there in amazement.

“We have quite a long record of seeing silverbacks dismantle snares,” Archibald told Prigg. “But we had never seen it passed on to youngsters like that.”  And the youngsters moved “with such speed and purpose and such clarity … knowing,” she added.

“This is absolutely the first time that we’ve seen juveniles doing that … I don’t know of any other reports in the world of juveniles destroying snares,” Veronica Vecellio, gorilla program coordinator at the Dian Fossey Gorilla Fund’s Karisoke Research Center, told National Geographic.

Every day trackers from the Karisoke center scour the forest for snares, dismantling any they find in order to protect the endangered mountain gorillas, which the International Union for Conservation of Nature (IUCN) says face “a very high risk of extinction in the wild.”

Adults generally have enough strength to free themselves from the snares, but juveniles usually do not, and often die as a result of snare-related wounds. Such was the case of an ensnared infant, Ngwino, found too late by Karisoke workers last week. The infant’s shoulder was dislocated during an escape attempt, and gangrene had set in after the ropes cut deep into her leg.

A snare consists of a noose tied to a branch or a bamboo stalk. The rope is pulled downward, bending the branch, and a rock or bent stick is used to hold the noose to the ground, keeping the branch tight. Then vegetation is placed over the noose to camouflage it. When an animal budges the rock or stick, the branch swings upward and the noose closes around the prey, usually the leg, and, depending on the weight of the animal, is hoisted up into the air.

Vecellio said the speed with which everything happened leads her to believe this wasn’t the first time the juveniles had dismantled a trap.

“They were very confident,” she said. “They saw what they had to do, they did it, and then they left.”

Since gorillas in the Kuryama group have been snared before, Vecellio said it is likely that the juveniles know these snares are dangerous. “That’s why they destroyed them.”

“Chimpanzees are always quoted as being the tool users, but I think, when the situation provides itself, gorillas are quite ingenious” too, said veterinarian Mike Cranfield, executive director of the Mountain Gorilla Veterinary Project.

He speculated that the gorillas may have learned how to destroy the traps by watching the Karisoke trackers. “If we could get more of them doing it, it would be great,” he joked.

But Vecellio said it would go against Karisoke center policies and ethos to actively instruct the apes. “We try as much as we can to not interfere with the gorillas. We don’t want to affect their natural behavior.”

Pictures of the incident have gone viral, and numerous fans on the Fund’s Facebook page have shared comments cheering for the young gorillas. Archibald said capturing the interaction was “so touching that I felt everybody with any brains would be touched.”


Title: WSJ: Are we really getting smarter?
Post by: Crafty_Dog on September 22, 2012, 07:02:02 AM


Are We Really Getting Smarter?
Americans' IQ scores have risen steadily over the past century. James R. Flynn examines why.
By JAMES R. FLYNN

IQ tests aren't perfect, but they can be useful. If a boy doing badly in class does really well on one, it is worth investigating whether he is being bullied at school or having problems at home. The tests also roughly predict who will succeed at college, though factors like motivation and self-control are at least as important.

 


Advanced nations like the U.S. have experienced massive IQ gains over time (a phenomenon that I first noted in a 1984 study and is now known as the "Flynn Effect"). From the early 1900s to today, Americans have gained three IQ points per decade on both the Stanford-Binet Intelligence Scales and the Wechsler Intelligence Scales. These tests have been around since the early 20th century in some form, though they have been updated over time. Another test, Raven's Progressive Matrices, was invented in 1938, but there are scores for people whose birth dates go back to 1872. It shows gains of five points per decade.

In 1910, scored against today's norms, our ancestors would have had an average IQ of 70 (or 50 if we tested with Raven's). By comparison, scored against the norms of 1910, our mean IQ today would be 130 to 150, depending on the test. Are we geniuses, or were they just dense?
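
As a quick check on that arithmetic, here is a tiny Python sketch using the per-decade gains quoted above and treating the span from 1910 to today as roughly ten decades (an assumption made for round numbers):

decades = 10  # roughly a century between 1910 and today

for test, points_per_decade in [("Wechsler/Stanford-Binet", 3), ("Raven's", 5)]:
    gain = points_per_decade * decades
    print(f"{test}: {gain}-point gain; "
          f"1910 mean scored on today's norms = {100 - gain}; "
          f"today's mean scored on 1910 norms = {100 + gain}")
# Prints a 30-point gain (70 vs. 130) for the Wechsler/Stanford-Binet and a
# 50-point gain (50 vs. 150) for Raven's, matching the figures in the text.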

These alternatives sparked a wave of skepticism about IQ. How could we claim that the tests were valid when they implied such nonsense? Our ancestors weren't dumb compared with us, of course. They had the same practical intelligence and ability to deal with the everyday world that we do. Where we differ from them is more fundamental: Rising IQ scores show how the modern world, particularly education, has changed the human mind itself and set us apart from our ancestors. They lived in a much simpler world, and most had no formal schooling beyond the sixth grade.

The Raven's test uses images to convey logical relationships. The Wechsler has 10 subtests, some of which do much the same, while others measure traits that intelligent people are likely to pick up over time, such as a large vocabulary and the ability to classify objects.

Modern people do so well on these tests because we are new and peculiar. We are the first of our species to live in a world dominated by categories, hypotheticals, nonverbal symbols and visual images that paint alternative realities. We have evolved to deal with a world that would have been alien to previous generations.

Raven's Progressive Matrices are nonverbal, multiple-choice measures of general intelligence. In each test item, the subject is asked to identify the missing element that completes a pattern.
A century ago, people mostly used their minds to manipulate the concrete world for advantage. They wore what I call "utilitarian spectacles." Our minds now tend toward logical analysis of abstract symbols—what I call "scientific spectacles." Today we tend to classify things rather than to be obsessed with their differences. We take the hypothetical seriously and easily discern symbolic relationships.

The mind-set of the past can be seen in interviews between the great psychologist Alexander Luria and residents of rural Russia during the 1920s—people who, like ourselves in 1910, had little formal education.

Luria: What do a fish and crow have in common?

Reply: A fish it lives in water, a crow flies.

Luria: Could you use one word for them both?

Reply: If you called them "animals" that wouldn't be right. A fish isn't an animal, and a crow isn't either. A person can eat a fish but not a crow.

The prescientific person is fixated on differences between things that give them different uses. My father was born in 1885. If you asked him what dogs and rabbits had in common, he would have said, "You use dogs to hunt rabbits." Today a schoolboy would say, "They are both mammals." The latter is the right answer on an IQ test. Today we find it quite natural to classify the world as a prerequisite to understanding it.

Here is another example.

Luria: There are no camels in Germany; the city of B is in Germany; are there camels there or not?

Reply: I don't know, I have never seen German villages. If B is a large city, there should be camels there.

Luria: But what if there aren't any in all of Germany?

Reply: If B is a village, there is probably no room for camels.

The prescientific Russian wasn't about to treat something as important as the existence of camels hypothetically. Resistance to the hypothetical isn't just a state of mind unfriendly to IQ tests. Moral argument is more mature today than a century ago because we take the hypothetical seriously: We can imagine alternate scenarios and put ourselves in the shoes of others.

The following invented examples (not from an IQ test) show how our minds have evolved. All three present a series that implies a relationship; you must discern that relationship and complete the series based on multiple-choice answers:

1. [gun] / [gun] / [bullet] 2. [bow] / [bow] / [blank].

Pictures that represent concrete objects convey the relationship. In 1910, the average person could choose "arrow" as the answer.

1. [square] / [square] / [triangle]. 2. [circle] / [circle] / [blank].

In this question, the relationship is conveyed by shapes, not concrete objects. By 1960, many could choose semicircle as the answer: Just as the square is halved into a triangle, so the circle should be halved.

1. * / & / ? 2. M / B / [blank].

In this question, the relationship is simply that the symbols have nothing in common except that they are the same kind of symbol. That "relationship" transcends the literal appearance of the symbols themselves. By 2010, many could choose "any letter other than M or B" from the list as the answer.

This progression signals a growing ability to cope with formal education, not just in algebra but also in the humanities. Consider the exam questions that schools posed to 14-year-olds in 1910 and 1990. The earlier exams were all about socially valuable information: What were the capitals of the 45 states? Later tests were all about relationships: Why is the capital of many states not the largest city? Rural-dominated state legislatures hated big cities and chose Albany over New York, Harrisburg over Philadelphia, and so forth.

Our lives are utterly different from those led by most Americans before 1910. The average American went to school for less than six years and then worked long hours in factories, shops or agriculture. The only artificial images they saw were drawings or photographs. Aside from basic arithmetic, nonverbal symbols were restricted to musical notation (for an elite) and playing cards. Their minds were focused on ownership, the useful, the beneficial and the harmful.

Widespread secondary education has created a mass clientele for books, plays and the arts. Since 1950, there have been large gains on vocabulary and information subtests, at least for adults. More words mean that more concepts are conveyed. More information means that more connections are perceived. Better analysis of hypothetical situations means more innovation. As the modern mind developed, people performed better not only as technicians and scientists but also as administrators and executives.

A greater pool of those capable of understanding abstractions, more contact with people who enjoy playing with ideas, the enhancement of leisure—all of these developments have benefited society. And they have come about without upgrading the human brain genetically or physiologically. Our mental abilities have grown, simply enough, through a wider acquaintance with the world's possibilities.

—Mr. Flynn is the author of "Are We Getting Smarter? Rising IQ in the 21st Century" (Cambridge University Press).
Title: Five unique ways intelligent people screw up
Post by: Crafty_Dog on October 10, 2012, 12:02:44 PM


http://pjmedia.com/lifestyle/2012/09/29/the-5-unique-ways-intelligent-people-screw-up-their-lives/?singlepage=true
Title: Empathy and Analytical Mind mutually exclusive
Post by: Crafty_Dog on November 03, 2012, 08:06:13 PM
http://www.redorbit.com/news/science/1112722935/brain-empathy-analytical-thinking-103112/
Title: Jewish DNA
Post by: Crafty_Dog on June 20, 2013, 05:28:05 AM


http://fun.mivzakon.co.il/video/General/8740/%D7%9E%D7%97%D7%A7%D7%A8.html
Title: Smart Crow
Post by: Crafty_Dog on February 09, 2014, 03:23:26 AM
http://www.huffingtonpost.com/2014/02/06/crow-smartest-bird_n_4738171.html
Title: Elephant Artist
Post by: Crafty_Dog on August 07, 2014, 10:16:14 AM
http://www.igooglemo.com/2014/06/amazing-young-elephant-paints-elephant_15.html
Title: altruism
Post by: ccp on September 28, 2014, 05:09:41 PM
Extreme altruism
Right on!
Self-sacrifice, it seems, is the biological opposite of psychopathy
Sep 20th 2014 | From the print edition

FLYERS at petrol stations do not normally ask for someone to donate a kidney to an unrelated stranger. That such a poster, in a garage in Indiana, actually did persuade a donor to come forward might seem extraordinary. But extraordinary people such as the respondent to this appeal (those who volunteer to deliver aid by truck in Syria at the moment might also qualify) are sufficiently common to be worth investigating. And in a paper published this week in the Proceedings of the National Academy of Sciences, Abigail Marsh of Georgetown University and her colleagues do just that. Their conclusion is that extreme altruists are at one end of a “caring continuum” which exists in human populations—a continuum that has psychopaths at the other end.

Biology has long struggled with the concept of altruism. There is now reasonable agreement that its purpose is partly to be nice to relatives (with whom one shares genes) and partly to permit the exchanging of favours. But how the brain goes about being altruistic is unknown. Dr Marsh therefore wondered if the brains of extreme altruists might have observable differences from other brains—and, in particular, whether such differences might be the obverse of those seen in psychopaths.

She and her team used two brain-scanning techniques, structural and functional magnetic-resonance imaging (MRI), to study the amygdalas of 39 volunteers, 19 of whom were altruistic kidney donors. (The amygdalas, of which brains have two, one in each hemisphere, are areas of tissue central to the processing of emotion and empathy.) Structural MRI showed that the right amygdalas of altruists were 8.1% larger, on average, than those of people in the control group, though everyone’s left amygdalas were about the same size. That is, indeed, the obverse of what pertains in psychopaths, whose right amygdalas, previous studies have shown, are smaller than those of controls.

Functional MRI yielded similar results. Participants, while lying in a scanner, were shown pictures of men and women wearing fearful, angry or neutral expressions on their faces. Each volunteer went through four consecutive runs of 80 such images, and the fearful images (but not the other sorts) produced much more activity in the right amygdalas of the altruists than they did in those of the control groups, while the left amygdalas showed no such response. That, again, is the obverse of what previous work has shown is true of psychopaths, though in neither case is it clear why only the right amygdala is affected.

Dr Marsh’s result is interesting as much for what it says about psychopathy as for what it says about extreme altruism. Some biologists regard psychopathy as adaptive. They argue that if a psychopath can bully non-psychopaths into giving him what he wants, he will be at a reproductive advantage as long as most of the population is not psychopathic. The genes underpinning psychopathy will thus persist, though they can never become ubiquitous because psychopathy works only when there are non-psychopaths to prey on.

In contrast, Dr Marsh’s work suggests that what is going on is more like the way human height varies. Being tall is not a specific adaptation (though lots of research suggests tall people do better, in many ways, than short people do). Rather, tall people (and also short people) are outliers caused by unusual combinations of the many genes that govern height. If Dr Marsh is correct, psychopaths and extreme altruists may be the result of similar, rare combinations of genes underpinning the more normal human propensity to be moderately altruistic.

From the print edition: Science and technology
Title: Worm mind, robot body
Post by: Crafty_Dog on December 15, 2014, 04:41:26 PM
http://www.iflscience.com/technology/worms-mind-robot-body 
Title: The Artificial Intelligence Revolution
Post by: Crafty_Dog on February 23, 2015, 07:05:06 PM


http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

The URL for Part Two can be found in Part One.
Title: Ashkenazi intelligence
Post by: Crafty_Dog on February 24, 2015, 12:07:16 PM
Haven't read this yet, posting it here for my future convenience:

http://ieet.org/index.php/IEET/more/pellissier20130620 
Title: Autistic Boy with IQ of 170
Post by: Crafty_Dog on May 27, 2015, 05:44:32 PM
http://wakeup-world.com/2013/06/04/autistic-boy-discovers-gift-after-removal-from-state-run-therapy/
Title: Is it ethical to study the genetic component of IQ?
Post by: Crafty_Dog on October 03, 2015, 01:53:45 PM
http://www.bioedge.org/bioethics/is-it-ethical-to-investigate-the-genetic-component-of-iq/11594
Title: Artificial General Intelligence (AGI) about to upend the world as we know it
Post by: Crafty_Dog on March 12, 2016, 10:35:13 PM
Game ON: the end of the old economic system is in sight
Posted: 12 Mar 2016 11:23 AM PST
Google is a pioneer in limited artificial general intelligence (aka computers that can learn w/o preprogramming them). One successful example is AlphaGo.  It just beat a Go grandmaster three times in a row. 
 
 
What makes this win interesting is that AlphaGo didn't win through brute force.  Go is too complicated for that:
...the average 150-move game contains more possible board configurations — 10^170 — than there are atoms in the Universe, so it can’t be solved by algorithms that search exhaustively for the best move.
 
It also didn't win by extensive preprogramming by talented engineers, like IBM's Deep Blue did to win at Chess. 
 
Instead, AlphaGo won this victory by learning how to play the game from scratch using this process:
   No assumptions.  AlphaGo approached the game without any assumptions.  This is called a model-free approach.  This allows it to program itself from scratch, by building complex models human programmers can't understand/match.
   Big Data.  It then learned the game by interacting with a database filled with 30 million games previously played by human beings.  The ability to bootstrap a model from data removes almost all of the need for engineering and programming talent currently needed for big systems.  That's huge.
   Big Sim (by the way, Big Sim will be as well known as Big Data in five years <-- heard it here first). Finally, it applied and honed that learning by playing itself on 50 computers night and day until it became good enough to play a human grandmaster. (A toy sketch of this self-play idea follows below.)
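
To make the "learning from scratch through self-play" idea concrete, here is a toy Python sketch. It is emphatically not AlphaGo's architecture (which pairs deep neural networks with Monte Carlo tree search); it is just a tabular learner that teaches itself a trivial take-away game by playing against itself, with no hand-coded strategy.

import random
from collections import defaultdict

PILE, ACTIONS = 10, (1, 2, 3)   # take 1-3 stones; whoever takes the last stone wins
Q = defaultdict(float)          # Q[(stones_left, action)] -> estimated value
alpha, epsilon, episodes = 0.5, 0.1, 20000

def choose(stones, explore=True):
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < epsilon:
        return random.choice(legal)           # occasionally try something new
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(episodes):
    stones, history = PILE, []                # history of (state, action) per move
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The player who made the last move wins (+1); the opponent lost (-1).
    # Walk back through the game, updating each move from its mover's viewpoint.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += alpha * (reward - Q[(state, action)])
        reward = -reward

# After enough self-play the learned policy typically discovers the winning
# strategy: leave the opponent a multiple of 4 (e.g. take 2 from a pile of 10).
print("best first move from 10 stones:", choose(10, explore=False))
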
The surprise of this victory isn't that it occurred.  Most expected it would, eventually... 
 
Instead, the surprise is how fast it happened.  How fast AlphaGo was able to bootstrap itself to a mastery of the game.  It was fast. Unreasonably fast.
 
However, this victory goes way beyond the game of Go.  It is important because AlphaGo uses a generic technique for learning.  A technique that can be used to master a HUGE range of activities, quickly.  Activities that people get paid for today.
 
This implies the following:
   This technology is going to cut through the global economy like a hot knife through butter.  It learns fast and largely on its own.  It's widely applicable.  It doesn't only master what it has seen, it can innovate.  For example: some of the unheard of moves made by AlphaGo were considered "beautiful" by the Grandmaster it beat. 
   Limited AGI (deep learning in particular) will have the ability to do nearly any job currently being done by human beings -- from lawyers to judges, nurses to doctors, driving to construction -- potentially at a grandmaster's level of capability.  This makes it a buzzsaw.
   Very few people (and I mean very few) will be able to stay ahead of the limited AGI buzzsaw.   It learns so quickly, the fate of people stranded in former factory towns gutted by "free trade" is likely to be the fate of the highest paid technorati.  They simply don't have the capacity to learn fast enough or be creative enough to stay ahead of it.
Have fun,
 
John Robb
 
PS:  Isn't it ironic (or not) that at the very moment in history when we demonstrate a limited AGI (potentially, a tsunami of technological change) the western industrial bureaucratic political system starts to implode due to an inability to deal with the globalization (economic, finance and communications) enabled by the last wave of technological change?

PPS:  This has huge implications for warfare.  I'll write more about those soon.  Laying a foundation for understanding this change first.
Title: Inherited Intelligence, Charles Murray, Bell Curve
Post by: DougMacG on March 23, 2016, 10:02:07 AM
Also pertains to race and education. 
Charles Murray was co-author of The Bell Curve, a very long scientific book that became a lightning rod over one small part of it that reported differences in measured intelligence between races; therefore, the reasoning goes, the author must be a racist...  His co-author, Richard Herrnstein, died around the time the book was published, so Murray has owned the work in the two decades since.

Intelligence is 40%-80% heritable, a wide range but one that is nowhere near zero or 100%.

People tend to marry near their own intelligence level, which makes the differences grow rather than equalize over time.  He predicted this would have societal effects that have most certainly come true.

Being called a racist for publishing scientific data is nothing new, but Charles Murray has received more than his share of it.  What he could have or should have done is cover up the real results to fit what people like to hear, like the climate scientists do.  He didn't.

Most recently his work received a public rebuke from the President of Virginia Tech.

His response to that is a bit long but quite a worthwhile read, one that will save you the time of reading his 3-4 inch thick hardcover book if you haven't already read this important work.

https://www.aei.org/publication/an-open-letter-to-the-virginia-tech-community/

Charles Murray
March 17, 2016 9:00 am

An open letter to the Virginia Tech community

Last week, the president of Virginia Tech, Tim Sands, published an “open letter to the Virginia Tech community” defending lectures delivered by deplorable people like me (I’m speaking on the themes of Coming Apart on March 25). Bravo for President Sands’s defense of intellectual freedom. But I confess that I was not entirely satisfied with his characterization of my work. So I’m writing an open letter of my own.

Dear Virginia Tech community,

Since President Sands has just published an open letter making a serious allegation against me, it seems appropriate to respond. The allegation: “Dr. Murray is well known for his controversial and largely discredited work linking measures of intelligence to heredity, and specifically to race and ethnicity — a flawed socioeconomic theory that has been used by some to justify fascism, racism and eugenics.”

Let me make an allegation of my own. President Sands is unfamiliar either with the actual content of The Bell Curve — the book I wrote with Richard J. Herrnstein to which he alludes — or with the state of knowledge in psychometrics.

The Bell Curve and Charles Murray
I should begin by pointing out that the topic of The Bell Curve was not race, but, as the book’s subtitle says, “Intelligence and Class Structure in American Life.” Our thesis was that over the last half of the 20th century, American society has become cognitively stratified. At the beginning of the penultimate chapter, Herrnstein and I summarized our message:

Predicting the course of society is chancy, but certain tendencies seem strong enough to worry about:
An increasingly isolated cognitive elite.
A merging of the cognitive elite with the affluent.
A deteriorating quality of life for people at the bottom end of the cognitive distribution.
Unchecked, these trends will lead the U.S. toward something resembling a caste society, with the underclass mired ever more firmly at the bottom and the cognitive elite ever more firmly anchored at the top, restructuring the rules of society so that it becomes harder and harder for them to lose. [p. 509].
It is obvious that these conclusions have not been discredited in the twenty-two years since they were written. They may be more accurately described as prescient.

Now to the substance of President Sands’s allegation.

The heritability of intelligence

Richard Herrnstein and I wrote that cognitive ability as measured by IQ tests is heritable, somewhere in the range of 40% to 80% [pp. 105–110], and that heritability tends to rise as people get older. This was not a scientifically controversial statement when we wrote it; that President Sands thinks it has been discredited as of 2016 is amazing.

You needn’t take my word for it. In the wake of the uproar over The Bell Curve, the American Psychological Association (APA) assembled a Task Force on Intelligence consisting of eleven of the most distinguished psychometricians in the United States. Their report, titled “Intelligence: Knowns and Unknowns,” was published in the February 1996 issue of the APA’s peer-reviewed journal, American Psychologist. Regarding the magnitude of heritability (represented by h²), here is the Task Force’s relevant paragraph. For purposes of readability, I have omitted the citations embedded in the original paragraph:

If one simply combines all available correlations in a single analysis, the heritability (h²) works out to about .50 and the between-family variance (c²) to about .25. These overall figures are misleading, however, because most of the relevant studies have been done with children. We now know that the heritability of IQ changes with age: h² goes up and c² goes down from infancy to adulthood. In childhood h² and c² for IQ are of the order of .45 and .35; by late adolescence h² is around .75 and c² is quite low (zero in some studies) [p. 85].
The position we took on heritability was squarely within the consensus state of knowledge. Since The Bell Curve was published, the range of estimates has narrowed somewhat, tending toward modestly higher estimates of heritability.

Intelligence and race

There’s no doubt that discussing intelligence and race was asking for trouble in 1994, as it still is in 2016. But that’s for political reasons, not scientific ones.

There’s no doubt that discussing intelligence and race was asking for trouble in 1994, as it still is in 2016. But that’s for political reasons, not scientific ones. Once again, the state of knowledge about the basics is not particularly controversial. The mean scores for all kinds of mental tests vary by ethnicity. No one familiar with the data disputes that most elemental statement. Regarding the most sensitive difference, between Blacks and Whites, Herrnstein and I followed the usual estimate of one standard deviation (15 IQ points), but pointed out that the magnitude varied depending on the test, sample, and where and how it was administered. What did the APA Task Force conclude? “Although studies using different tests and samples yield a range of results, the Black mean is typically about one standard deviation (about 15 points) below that of Whites. The difference is largest on those tests (verbal or nonverbal) that best represent the general intelligence factor g” [p. 93].

Is the Black/White differential diminishing? In The Bell Curve, we discussed at length the evidence that the Black/White differential has narrowed [pp. 289–295], concluding that “The answer is yes with (as usual) some qualifications.” The Task Force’s treatment of the question paralleled ours, concluding with “[l]arger and more definitive studies are needed before this trend can be regarded as established” [p. 93].

Can the Black/White differential be explained by test bias? In a long discussion [pp. 280–286], Herrnstein and I presented the massive evidence that the predictive validity of mental tests is similar for Blacks and Whites and that cultural bias in the test items or their administration do not explain the Black/White differential. The Task Force’s conclusions regarding predictive validity: “Considered as predictors of future performance, the tests do not seem to be biased against African Americans” [p. 93]. Regarding cultural bias and testing conditions:  “Controlled studies [of these potential sources of bias] have shown, however, that none of them contributes substantially to the Black/White differential under discussion here” [p. 94].

Can the Black/White differential be explained by socioeconomic status? We pointed out that the question has two answers: Statistically controlling for socioeconomic status (SES) narrows the gap. But the gap does not narrow as SES goes up — i.e., measured in standard deviations, the differential between Blacks and Whites with high SES is not narrower than the differential between those with low SES [pp. 286–289]. Here’s the APA Task Force on this topic:

Several considerations suggest that [SES] cannot be the whole explanation. For one thing, the Black/White differential in test scores is not eliminated when groups or individuals are matched for SES. Moreover, the data reviewed in Section 4 suggest that—if we exclude extreme conditions—nutrition and other biological factors that may vary with SES account for relatively little of the variance in such scores [p. 94].
The notion that Herrnstein and I made claims about ethnic differences in IQ that have been scientifically rejected is simply wrong.

And so on. The notion that Herrnstein and I made claims about ethnic differences in IQ that have been scientifically rejected is simply wrong. We deliberately remained well within the mainstream of what was confidently known when we wrote. None of those descriptions have changed much in the subsequent twenty-two years, except to be reinforced as more has been learned. I have no idea what countervailing evidence President Sands could have in mind.

At this point, some readers may be saying to themselves, “But wasn’t The Bell Curve the book that tried to prove blacks were genetically inferior to whites?” I gather that was President Sands’ impression as well. It has no basis in fact. Knowing that people are preoccupied with genes and race (it was always the first topic that came up when we told people we were writing a book about IQ), Herrnstein and I offered a seventeen-page discussion of genes, race, and IQ [pp. 295–311]. The first five pages were devoted to explaining the context of the issue — why, for example, the heritability of IQ among humans does not necessarily mean that differences between groups are also heritable. Four pages were devoted to the technical literature arguing that genes were implicated in the Black/White differential. Eight pages were devoted to arguments that the causes were environmental. Then we wrote:

If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate. [p. 311].
That’s it—the sum total of every wild-eyed claim that The Bell Curve makes about genes and race. There’s nothing else. Herrnstein and I were guilty of refusing to say that the evidence justified a conclusion that the differential had to be entirely environmental. On this issue, I have a minor quibble with the APA Task Force, which wrote “There is not much direct evidence on [a genetic component], but what little there is fails to support the genetic hypothesis” [p. 95]. Actually there was no direct evidence at all as of the mid-1990s, but the Task Force chose not to mention a considerable body of indirect evidence that did in fact support the genetic hypothesis. No matter. The Task Force did not reject the possibility of a genetic component. As of 2016, geneticists are within a few years of knowing the answer for sure, and I am content to wait for their findings.

But I cannot leave the issue of genes without mentioning how strongly Herrnstein and I rejected the importance of whether genes are involved. This passage from The Bell Curve reveals how very, very different the book is from the characterization of it that has become so widespread:

In sum: If tomorrow you knew beyond a shadow of a doubt that all the cognitive differences between races were 100 percent genetic in origin, nothing of any significance should change. The knowledge would give you no reason to treat individuals differently than if ethnic differences were 100 percent environmental. By the same token, knowing that the differences are 100 percent environmental in origin would not suggest a single program or policy that is not already being tried. It would justify no optimism about the time it will take to narrow the existing gaps. It would not even justify confidence that genetically based differences will not be upon us within a few generations. The impulse to think that environmental sources of difference are less threatening than genetic ones is natural but illusory.
In any case, you are not going to learn tomorrow that all the cognitive differences between races are 100 percent genetic in origin, because the scientific state of knowledge, unfinished as it is, already gives ample evidence that environment is part of the story. But the evidence eventually may become unequivocal that genes are also part of the story. We are worried that the elite wisdom on this issue, for years almost hysterically in denial about that possibility, will snap too far in the other direction. It is possible to face all the facts on ethnic and race differences on intelligence and not run screaming from the room. That is the essential message [pp. 314-315].
I have been reluctant to spend so much space discussing The Bell Curve’s treatment of race and intelligence because it was such an ancillary topic in the book. Focusing on it in this letter has probably made it sound as if it was as important as President Sands’s open letter implied.

But I had to do it. For two decades, I have had to put up with misrepresentations of The Bell Curve. It is annoying. After so long, when so many of the book’s main arguments have been so dramatically vindicated by events, and when our presentations of the meaning and role of IQ have been so steadily reinforced by subsequent research in the social sciences, not to mention developments in neuroscience and genetics, President Sands’s casual accusation that our work has been “largely discredited” was especially exasperating. The president of a distinguished university should take more care.

It is in that context that I came to the end of President Sands’s indictment, accusing me of promulgating “a flawed socioeconomic theory that has been used by some to justify fascism, racism and eugenics.” At that point, President Sands went beyond the kind of statement that merely reflects his unfamiliarity with The Bell Curve and/or psychometrics. He engaged in intellectual McCarthyism.
Title: Elon Musk vs. Artificial Intelligence
Post by: Crafty_Dog on March 29, 2017, 08:33:55 AM
http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x
Title: Qualia
Post by: Crafty_Dog on April 13, 2017, 11:53:51 AM
http://neurohacker.com/qualia/


My son is intrigued by this.  Any comments?
Title: Re: Qualia
Post by: G M on April 20, 2017, 07:33:10 PM
http://neurohacker.com/qualia/


My son is intrigued by this.  Any comments?

My money is on this being a ripoff.
Title: CD, FWIW I agree with GM
Post by: ccp on April 22, 2017, 11:03:53 AM
https://www.theatlantic.com/health/archive/2013/07/the-vitamin-myth-why-we-think-we-need-supplements/277947/

Crafty,
most if not all supplement sales carry a similar pattern of promotion. You get someone with a science background who recites biochemical pathways showing that a particular substance is involved in some sort of function that is needed for health, from vitamin C to B12 to cofactors with magnesium, and hundreds, probably thousands, more.

They impress the non-scientist with "cofactors" and long chemical names, and cite studies that show some relationship to our health. Then they may state that taking large doses of the cofactor or other chemical increases the benefit to our health over just normal doses. Or they will vary the presentation with claims that the nutrient or chemical has to be taken in a certain way with other substances, and then, and only then, will we all reap some increased benefit to our "prostate health," our cognitive health, our digestive health, more energy, etc.

Then they find mostly obscure studies by usually second-rate or no-name researchers who are spending grant money, trying to make some sort of name for themselves, or, I even suspect, at times making up data for bribes, and then publish the data and their "research" in one of the money-making journals that are usually second-rate or not well monitored or peer reviewed (even that process is subject to outright fraud).

So now they cite the impressive-sounding biochemistry in order to sound like they understand something the rest of us do not, and they "discovered" this chemical (or chemicals) that these usually insignificant if not fraudulent studies suggest confers some sort of benefit.

The chemicals are often obscure, from some exotic jungle or faraway ocean or island, or come with some claim that they are the only ones who can provide the proper purity or concentration or mix or other elixir that no one else can duplicate.

If any real scientist or doctor disputes their claim, they come back with a vengeance, arguing that the doctor or scientist is just threatened by this "cure" that would put the doctor or scientist out of business.

You don't have to take my word for it, but the vast majority of these things, if not all, are scams. They all have similar themes with variations that play over and over again to people who are looking to stay healthy, stay young, get an edge in life, have more sexual prowess, remember more, be smarter.

There are billions to be made.

I hope I don't sound like some condescending doctor who thinks he knows it all.  I don't.  And I know I don't. 
But even on Shark Tank, when some entrepreneur came on trying to get the sharks to buy into some sort of supplement, they said all the supplements are just a "con".

FWIW I agree with it.



Title: Re: Intelligence and Psychology
Post by: Crafty_Dog on April 22, 2017, 02:19:56 PM
Thank you!
Title: Jordan Peterson on Intelligence
Post by: Crafty_Dog on May 25, 2017, 12:09:03 PM
https://www.youtube.com/watch?v=P8opBj1LjSU
Title: AI develops its own language
Post by: Crafty_Dog on June 16, 2017, 04:06:11 PM


https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/
Title: Re: AI develops its own language
Post by: G M on June 17, 2017, 10:10:27 AM


https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/

https://www.youtube.com/watch?v=ih_l0vBISOE
Title: Grit
Post by: Crafty_Dog on October 09, 2017, 08:31:50 AM
http://www.illumeably.com/2017/07/27/success-vs-failure/
Title: Mark Cuban on AI
Post by: Crafty_Dog on November 06, 2017, 09:02:58 AM
https://www.marketwatch.com/story/mark-cuban-tells-kyle-bass-ai-will-change-everything-weighs-in-on-bitcoin-2017-11-04?mod=cx_picks&cx_navSource=cx_picks&cx_tag=other&cx_artPos=7#cxrecs_s

There is no way to beat the machines, so you’d better bone up on what makes them tick.

That was the advice of billionaire investor Mark Cuban, who was interviewed by Kyle Bass, founder and principal of Hayman Capital Management, in late October for Real Vision. The interview published Nov. 3.

“I think artificial intelligence is going to change everything, everything, 180 degrees,” said Cuban, who added that changes seen by AI would “dwarf” the advances that have been seen in technology over the last 30 years or more, even the internet.

The owner of the NBA’s Dallas Mavericks and a regular on the TV show “Shark Tank” said AI is going to displace a lot of jobs, something that will play out fast, over the next 20 to 30 years. He said real estate is one industry that’s likely to get hit hard.

Read: Kyle Bass says this will be the first sign of a bigger market meltdown

“So, the concept of you calling in to make an appointment to have somebody pick up your car to get your oil changed, right — someone will still drive to get your car, but there’s going to be no people in transacting any of it,” he said.

Cuban says he’s trying to learn all he can right now about machine learning, neural networks, deep learning, writing code and programming languages such as Python. Machine learning says, “OK, we can take a lot more variables than you can ever think of,” he said.
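To make Cuban's point concrete for readers who code: a simple linear model will happily fit hundreds of input variables at once, far more than any person could juggle. The Python sketch below is purely illustrative (the data is synthetic and the 200-variable count is invented), not anything from the interview.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 1000, 200                 # 200 variables, an arbitrary choice
X = rng.normal(size=(n_samples, n_features))
true_weights = rng.normal(size=n_features)
y = X @ true_weights + rng.normal(scale=0.1, size=n_samples)

# Ordinary least squares over all 200 variables in a single call.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
print("max coefficient error:", float(np.abs(weights - true_weights).max()))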

And AI is seeing big demand when it comes to jobs, he said. “At the high end, we can’t pay enough to get — so when I talk about, within the artificial intelligence realm, there’s a company in China paying million-dollar bonuses to get the best graduates,” said Cuban.

The U.S. is falling badly behind when it comes to AI, with Montreal now the “center of the universe for computer vision. It’s not U.S.-based schools that are dominating any longer in those areas,” he said.

The AI companies

As for companies standing to benefit from AI, Cuban said he thinks “the FANG stocks are going to crush them,” noting that his biggest public holding is Amazon.com Inc. (AMZN).

“They’re the world’s greatest startups with liquidity. If you look at them as just a public company where you want to see what the P/E ratio is and what the discounted cash value-- you’re never going to get it, right? You’re never going to see it. And if you say Jeff Bezos (chief executive officer of Amazon), Reed Hastings (chief executive officer of Netflix Inc.) — those are my 2 biggest holdings,” he said.

Read: 10 wildly successful people and the surprising jobs that kick-started their careers

Cuban said he’s less sold on Apple Inc., though he said it’s trying to make progress on AI, along with Alphabet Inc. and Facebook Inc. “They’re just nonstop startups. They’re in a war. And you can see the market value accumulating to them because of that,” he said.

But still, they aren’t all owning AI yet, and there are lots of opportunities for smaller companies, he added.

On digital currencies and ICO

While Bass commented that he has been just a spectator when it comes to blockchain — a decentralized ledger used to record and verify transactions — Cuban said he’s a big fan. But when it comes to bitcoin, ethereum and other cryptocurrencies, he said it would be a struggle to see them become real currencies because only a limited number of transactions can be done.

Read: Two ETF sponsors file for funds related to blockchain, bitcoin’s foundational technology

“So, it’s going to be very difficult for it to be a currency when the time and the expense of doing a transaction is 100 times what you can do over a Visa or Mastercard, right?” asked Cuban, adding that really the only value of bitcoin and ethereum is that they are just digital assets that are collectible.

Read: Bitcoin may be staging the biggest challenge yet to gold and silver

“And in this particular case, it’s a brilliant collectible that’s probably more like art than baseball cards, stamps, or coins, right, because there’s a finite amount that are going to be made, right? There are 21.9 million bitcoins that are going to be made,” he said.

Cuban said initial coin offerings — fundraising for new cryptocurrency ventures — “really are an opportunity,” and he has been involved in UniCoin, which does ETrade, and Unikrm, which does legal sports betting for Esports and other sports outside the United States.

Read: What is an ICO?

But he and Bass both commented about how the industry needs regulating, with Bass noting that ICOs have raised $3 billion this year, and $2 billion going into September. While many are “actually going to do well,” so many are “just completely stupid and frauds,” he said.

“It’s the dumb ones that are going to get shut down,” agreed Cuban.

One problem: “There’s nobody at the top that has any understanding of it,” he added, referring to the Securities and Exchange Commission.

Cuban ended the interview with some advice on where to invest now. He said for those investors not too knowledgeable about markets, the best bet is a cheap S&P 500 fund, but that putting 5% in bitcoin or ethereum isn’t a bad idea on the theory that it’s like investing in artwork.

Listen to the whole interview on Real Vision here
Title: The Psychology of Human Misjudgement
Post by: Crafty_Dog on March 13, 2018, 06:03:50 AM
https://www.youtube.com/watch?v=pqzcCfUglws&feature=youtu.be
Title: Intelligence across the generations
Post by: Crafty_Dog on March 15, 2018, 10:54:38 PM
https://ourworldindata.org/intelligence
Title: The Intelligence of Crows
Post by: Crafty_Dog on April 15, 2018, 09:46:06 PM
https://www.ted.com/talks/joshua_klein_on_the_intelligence_of_crows#t-178765
Title: Chinese Eugenics
Post by: Crafty_Dog on May 08, 2018, 02:59:20 PM
https://www.vice.com/en_us/article/5gw8vn/chinas-taking-over-the-world-with-a-massive-genetic-engineering-program
Title: Nautilus: Is AI inscrutable?
Post by: Crafty_Dog on June 17, 2018, 07:13:02 AM

http://nautil.us//issue/40/learning/is-artificial-intelligence-permanently-inscrutable?utm_source=Nautilus&utm_campaign=270e193d5c-EMAIL_CAMPAIGN_2018_06_15_08_18&utm_medium=email&utm_term=0_dc96ec7a9d-270e193d5c-61805061
Title: Evidence that Viruses May Cause Alzheimer's Disease
Post by: bigdog on July 15, 2018, 07:20:51 AM
https://gizmodo.com/yet-more-evidence-that-viruses-may-cause-alzheimers-dis-1827511539
Title: Brain Games for older folks
Post by: Crafty_Dog on July 26, 2018, 01:25:51 PM


https://www.nj.com/healthfit/index.ssf/2018/07/brain_training_breakthrough_offers_new_hope_in_bat.html
Title: The authoritarian Chinese vision for AI
Post by: Crafty_Dog on August 10, 2018, 05:38:18 PM
https://www.nationalreview.com/2018/08/china-artificial-intelligence-race/?utm_source=Sailthru&utm_medium=email&utm_campaign=NR%20Daily%20Monday%20through%20Friday%202018-08-10&utm_term=NR5PM%20Actives
Title: China exporting illiberal AI
Post by: Crafty_Dog on August 15, 2018, 01:53:24 PM
https://www.mercatornet.com/features/view/exporting-enslavement-chinas-illiberal-artificial-intelligence/21607
Title: Human Brain builds structures in 11 dimensions; more
Post by: Crafty_Dog on August 19, 2018, 07:46:48 AM

https://bigthink.com/paul-ratner/our-brains-think-in-11-dimensions-discover-scientists?utm_campaign=Echobox&utm_medium=Social&utm_source=Twitter#Echobox=1534419641

Also see

https://aeon.co/videos/our-divided-brains-are-far-more-complex-and-remarkable-than-a-left-right-split

http://runwonder.com/life/science-explains-what-happens.html

Title: Stratfor: AI and great power competition
Post by: Crafty_Dog on October 18, 2018, 09:52:05 AM
Highlights

    Aging demographics and an emerging great power competition pitting China against the United States form the backdrop to a high-stakes race in artificial intelligence development.
    The United States, for now, has a lead overall in AI development, but China is moving aggressively to try and overtake its American rivals by 2030.
    While deep integration across tech supply chains and markets has occurred in the past couple of decades, rising economic nationalism and a growing battle over international standards will balkanize the global tech sector.
    AI advancements will boost productivity and economic growth, but creative destruction in the workforce will drive political angst in much of the world, putting China's digital authoritarianism model as well as liberal democracies to the test.

For better or worse, the advancement and diffusion of artificial intelligence technology will come to define this century. Whether that statement should fill your soul with terror or delight remains a matter of intense debate. Techno-idealists and doomsdayers will paint their respective utopian and dystopian visions of machine-kind, making the leap from what we know now as "narrow AI" to "general AI" to surpass human cognition within our lifetime. On the opposite end of the spectrum, yawning skeptics will point to Siri's slow intellect and the human instinct of Capt. Chesley "Sully" Sullenberger – the pilot of the US Airways flight that successfully landed on the Hudson River in 2009 – to wave off AI chatter as a heap of hype not worth losing sleep over.

The fact is that the development of AI – a catch-all term that encompasses neural networks and machine learning and deep learning technologies – has the potential to fundamentally transform civilian and military life in the coming decades. Regardless of whether you're a businessperson pondering your next investment, an entrepreneur eyeing an emerging opportunity, a policymaker grappling with regulation or simply a citizen operating in an increasingly tech-driven society, AI is a global force that demands your attention.

The Big Picture

As Stratfor wrote in its 2018 Third-Quarter Forecast, the world is muddling through a blurry transition from the post-Cold War world to an emerging era of great power competition. The race to dominate AI development will be a defining feature of U.S.-China rivalry.

An Unstoppable Force

Willingly or not, even the deepest skeptics are feeding the AI force nearly every minute of every day. Every Google (or Baidu) search, Twitter (or Weibo) post, Facebook (or Tencent) ad and Amazon (or Alibaba) purchase is another click creating mountains of data – some 2.2 billion gigabytes globally every day – that companies are using to train their algorithms to anticipate and mimic human behavior. This creates a virtuous (or vicious, depending on your perspective) cycle: the more users engage with everyday technology platforms, the more data is collected; the more data that's collected, the more the product improves; the more competitive the product, the more users and billions of dollars in investment it will attract; a growing number of users means more data can be collected, and the loop continues.
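For readers who think in code, the loop described above can be caricatured in a few lines of Python. This is a toy sketch only; the ten-data-points-per-user figure, the growth rate and the quality formula are invented for illustration, not taken from the article.

def simulate_flywheel(users=1_000_000, quality=0.5, steps=5):
    # Toy model of the data flywheel: more users -> more data -> better product -> more users.
    for step in range(1, steps + 1):
        data = users * 10                          # assume ~10 data points per user per day (invented)
        quality = min(1.0, quality + data / 1e9)   # more data nudges product quality upward
        users = int(users * (1 + 0.2 * quality))   # a better product attracts more users
        print(f"step {step}: users={users:,}, quality={quality:.2f}")

simulate_flywheel()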

And unlike previous AI busts, the development of this technology is occurring amid rapidly advancing computing power, where the use of graphical processing units (GPUs) and development of custom computer chips is giving AI developers increasingly potent hardware to drive up efficiency and drive down cost in training their algorithms. To help fuel advancements in AI hardware and software, AI investment is also growing at a rapid pace.

The Geopolitical Backdrop to the Global AI Race

AI is both a driver and a consequence of structural forces reshaping the global order. Aging demographics – an unprecedented and largely irreversible global phenomenon – is a catalyst for AI development. As populations age and shrink, financial burdens on the state mount and labor productivity slows, sapping economic growth over time. Advanced industrial economies already struggling to cope with the ill effects of aging demographics with governments that are politically squeamish toward immigration will relentlessly look to machine learning technologies to increase productivity and economic growth in the face of growing labor constraints.

The global race for AI supremacy will feature prominently in a budding great power competition between the United States and China. China was shocked in 2016 when Google DeepMind's AlphaGo beat the world champion of Go, an ancient Chinese strategy game (Chinese AI state planners dubbed the event their "Sputnik moment"), and has been deeply shaken by U.S. President Donald Trump's trade wars and the West's growing imperative to keep sensitive technology out of Chinese competitors' hands. Just in the past couple of years alone, China's state focus on AI development has skyrocketed to ensure its technological drive won't suffer a short circuit due to its competition with the United States.

How the U.S. and China Stack Up in AI Development

Do or Die for Beijing

The United States, for now, has the lead in AI development when it comes to hardware, research and development, and a dynamic commercial AI sector. China, by the sheer size of its population, has a much larger data pool, but is critically lagging behind the United States in semiconductor development. Beijing, however, is not lacking in motivation in its bid to overtake the United States as the premier global AI leader by 2030. And while that timeline may appear aggressive, China's ambitious development in AI in the coming years will be unfettered by the growing ethical, privacy and antitrust concerns occupying the West. China is also throwing hundreds of billions of dollars into fulfilling its AI mission, both in collaboration with its standing tech champions and by encouraging the rise of unicorns, or privately held startups valued at $1 billion or more.

By incubating and rewarding more and more startups, Beijing is finding a balance between focusing its national champions on the technologies most critical to the state (sometimes by taking an equity stake in the company) without stifling innovation. In the United States, on the other hand, it would be disingenuous to label U.S.-based multinational firms, which park most of their corporate profits overseas, as true "national" champions. Instead of the state taking the lead in funding high-risk and big-impact research in emerging technologies as it has in the past, the roles in the West have been flipped; private tech companies are in the driver's seat while the state is lunging at the steering wheel, trying desperately to keep China in its rear view.

The Ideological Battleground

The United States may have thought its days of fighting globe-spanning ideological battles ended with the Cold War. Not so. AI development is spawning a new ideological battlefield between the United States and China, pitting the West's notion of liberal democracy against China's emerging brand of digital authoritarianism. As neuroscientist Nicholas Wright highlights in his article, "How Artificial Intelligence Will Reshape the Global Order," China's 2017 AI development plan "describes how the ability to predict and grasp group cognition means 'AI brings new opportunities for social construction.'" Central to this strategic initiative is China's diffusion of a "social credit system" (which is set to be fully operational by 2020) that would assign a score based on a citizen's daily activities to determine everything from airfare class and loan eligibility to what schools your kids are allowed to attend. It's a tech-powered, state-driven approach to parse model citizens from the deplorables, so to speak.

The ability to harness AI-powered facial recognition and surveillance data to shape social behavior is an appealing tool, not just for Beijing, but for other politically paranoid states that are hungry for an alternative path to stability and are underwhelmed by the West's messy track record in promoting democracy. Wright describes how Beijing has exported its Great Firewall model to Thailand and Vietnam to barricade the internet while also supplying surveillance technology to the likes of Iran, Russia, Ethiopia, Zimbabwe, Zambia and Malaysia. Not only does this aid China's goal of providing an alternative to a U.S.-led global order, but it also gives China access to even wider data pools around the globe with which to hone its own technological prowess.

The European Hustle

Not wanting to be left behind in this AI great power race, Europe and Russia are hustling to catch up, but they will struggle in the end to keep pace with the United States and China. Russian President Vladimir Putin made headlines last year when he told an audience of Russian youths that whoever rules AI will rule the world. But the reality of Russia's capital constraints means Russia will have to choose carefully where it puts its rubles. Moscow will apply a heavy focus on AI military applications and will rely on cyber espionage and theft to try and find shortcuts to AI development, all while trying to maintain its strategic alignment with China to challenge the United States.

The EU Struggle to Create Unicorn Companies

While France harbors ambitious plans to develop an AI ecosystem for Europe and Germany frets over losing its industrial edge to U.S. and Chinese tech competitors, unavoidable and growing fractures within the European Union will hamper Europe's ability to play a leading AI role on the world stage. The European Union's cumbersome regulatory environment and fragmented digital market has been prohibitive for tech startups, a fact reflected in the European Union's low global share and value of unicorn companies. Meanwhile, the United Kingdom, home to Europe's largest pool of tech talent, will be keen on unshackling itself from the European Union's investment-inhibitive regulations as it stumbles out of the bloc.

A Battle over Talent and Standards

But wherever pockets of tech innovation already exist on the Continent, those relatively few companies and individuals are already prime targets for U.S. and Chinese tech juggernauts prowling the globe for AI talent. AI experts are a precious global commodity. According to a 2018 study by Element AI, there are roughly 22,000 doctorate-level researchers in the world, but only around 3,000 are actually looking for work and around 5,400 are presenting their research at AI conferences. U.S. and Chinese tech giants are using a variety of means – mergers and acquisitions, aggressive poaching, launching labs in cities like Paris, Montreal and Taiwan – to gobble up this tiny talent pool.

Largest Tech Companies by Market Capitalization

Even as Europe struggles to build up its own tech champions, the European Union can use its market size and conscientious approach to ethics, privacy and competition to push back on encroaching tech giants through hefty fines, data localization and privacy rules, taxation and investment restrictions. The bloc's rollout of its General Data Protection Regulation (GDPR) is designed to give Europeans more control over their personal data by limiting data storage times, deleting data on request and monitoring for data breaches. While big-tech firms have the means to adapt and pay fines, the move threatens to cripple smaller firms struggling to comply with the high cost of compliance. It also fundamentally restricts the continental data flows needed to fuel Europe's AI startup culture.
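The GDPR obligations mentioned here, retention limits and deletion on request, are easy to picture in code. The Python sketch below is a minimal, hypothetical illustration; the 30-day window, the field names and the in-memory store are assumptions made for the example, not anything prescribed by the regulation.

from datetime import datetime, timedelta

RETENTION = timedelta(days=30)   # assumed retention window for the example
store = {}                       # user_id -> (record, stored_at)

def save(user_id, record):
    store[user_id] = (record, datetime.utcnow())

def erase(user_id):
    # Right to erasure: delete a user's data on request.
    store.pop(user_id, None)

def purge_expired(now=None):
    # Drop anything held longer than the retention window.
    now = now or datetime.utcnow()
    expired = [uid for uid, (_, stored_at) in store.items() if now - stored_at > RETENTION]
    for uid in expired:
        del store[uid]

save("u1", {"email": "a@example.com"})
erase("u1")          # the user asked to be forgotten
purge_expired()      # routine cleanup
print(store)         # {}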

The United States in many ways shares Europe's concerns over issues like data privacy and competition, but it has a fundamentally different approach in how to manage those concerns. The European Union is effectively prioritizing individual privacy rights over free speech, while the United States does the reverse. Brussels will fixate on fairness, even at the cost of the bloc's own economic competitiveness, while Washington will generally avoid getting in the way of its tech champions. For example, while the European Union will argue that Google's dominance in multiple technological applications is by itself an abuse of its power that stifles competition, the United States will refrain from raising the antitrust flag unless tech giants are using their dominant position to raise prices for consumers.

U.S. and European government policy overlap instead in their growing scrutiny over foreign investment in sensitive technology sectors. Of particular concern is China's aggressive, tech-focused overseas investment drive and the already deep integration of Chinese hardware and software in key technologies used globally. A highly diversified company like Huawei, a pioneer in cutting-edge technologies like 5G and a mass producer of smartphones and telecommunications equipment, can leverage its global market share to play an influential role in setting international standards.

Washington, meanwhile, is lagging behind Brussels and Beijing in the race to establish international norms for cyber policy. While China and Russia have been persistent in their attempts to use international venues like the United Nations to codify their version of state-heavy cyber policy, the European Union has worked to block those efforts while pushing their own standards like GDPR.

This emerging dynamic of tightening restrictions in the West overall against Chinese tech encroachment, Europe's aggressive regulatory push against U.S. tech giants and China's defense and export of digital authoritarianism may altogether lead to a much more balkanized market for global tech companies in the future.

The AI Political Test of the Century

There is no shortage of AI reports by big-name consulting firms telegraphing to corporate audiences the massive productivity gains to come from AI in a range of industries, from financial, auto, insurance and retail to construction, cleaning and security. A 2017 PwC report estimated that AI could add $15.7 trillion to the global economy in 2030, of which $6.6 trillion would come from increased productivity and $9.1 trillion would come from increased consumption. The potential for double-digit impacts on GDP after years of stalled growth in much of the world is appealing, no doubt.

But lurking behind those massive figures is the question of just how well, how quickly and how much of a country's workforce will be able to adapt to these fast-moving changes. As the Austrian economist Joseph Schumpeter described in his 1942 book, Capitalism, Socialism and Democracy, the "creative destruction" that results from so-called industrial mutations "incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one." In the age of AI, the market will incessantly seek out scientists and creative thinkers. Machines will endlessly render millions of workers irrelevant. And new jobs, from AI empathy trainers to life coaches, will be created. Even as technology translates into productivity and economic gains overall, this will be a wrenching transition if workers are slow to learn new skills and if wage growth remains stagnant for much of the population.

Time will tell which model will be better able to cope with an expected rise in political angst as the world undergoes this AI revolution: China's untested model of digital authoritarianism or the West's time-tested, yet embattled, tradition in liberal democracy.
Title: Stratfor: How the US-China Power Comp. shapes the future of AI ethics
Post by: Crafty_Dog on October 18, 2018, 09:57:49 AM
second post

How the U.S.-China Power Competition Is Shaping the Future of AI Ethics
By Rebecca Keller
Senior Science and Technology Analyst, Stratfor
A U.S. Air Force MQ-1B Predator unmanned aerial vehicle returns from a mission to an air base in the Persian Gulf region.


    As artificial intelligence applications develop and expand, countries and corporations will have different opinions on how and when technologies should be employed. First movers like the United States and China will have an advantage in setting international standards.
    China will push back against existing Western-led ethical norms as its level of global influence rises and the major powers race to become technologically dominant.
    In the future, ethical decisions that prevent adoption of artificial intelligence applications in certain fields could limit political, security and economic advantages for specific countries.

Controversial new technologies such as automation and artificial intelligence are quickly becoming ubiquitous, prompting ethical questions about their uses in both the private and state spheres. A broader shift on the global stage will drive the regulations and societal standards that will, in turn, influence technological adoption. As countries and corporations race to achieve technological dominance, they will engage in a tug of war between different sets of values while striving to establish ethical standards. Western values have long been dominant in setting these standards, as the United States has traditionally been the most influential innovative global force. But China, which has successfully prioritized economic growth and technological development over the past several decades, is likely to play a bigger role in the future when it comes to tech ethics.

The Big Picture

The great power competition between China and the United States continues to evolve, leading to pushback against international norms, organizations and oversight. As the world sits at a key intersection of geopolitical and technological development, the battles to set new global standards will play out on emerging technological stages.

The field of artificial intelligence will be one of the biggest areas where different players will be working to establish regulatory guardrails and answer ethical questions in the future. Science fiction writer Isaac Asimov wrote his influential laws of robotics in the first half of the 20th century, and reality is now catching up to fiction. Questions over the ethics of AI and its potential applications are numerous: What constitutes bias within the algorithms? Who owns data? What privacy measures should be employed? And just how much control should humans retain in applying AI-driven automation? For many of these questions, there is no easy answer. And in fact, as the great power competition between China and the United States ramps up, they prompt another question: Who is going to answer them?

Questions of right and wrong are based on the inherent cultural values ingrained within a place. From an economic perspective, the Western ideal has always been the laissez-faire economy. And ethically, Western norms have prioritized privacy and the importance of human rights. But China is challenging those norms and ideals, as it uses a powerful state hand to run its economy and often chooses to sacrifice privacy in the name of development. On yet another front, societal trust in technology can also differ, influencing the commercial and military use of artificial intelligence.

Different Approaches to Privacy

One area where countries that intend to set global ethical standards for the future of technology have focused their attention is in the use and monetization of personal data. From a scientific perspective, more data equals better, smarter AI, meaning those with access to and a willingness to use that data could have a future advantage. However, ethical concerns over data ownership and the privacy of individuals and even corporations can and do limit data dispersion and use.

How various entities are handling the question of data privacy is an early gauge for how far AI application can go, in private and commercial use. It is also a question that reveals a major divergence in values. With its General Data Protection Regulation, which went into effect this year, the European Union has taken an early global lead on protecting the rights of individuals. Several U.S. states have passed or are working to pass similar legislation, and the U.S. government is currently considering an overarching federal policy that covers individual data privacy rights.

China, on the other hand, has demonstrated a willingness to prioritize the betterment of the state over the value of personal privacy. The Chinese public is generally supportive of initiatives that use personal data and apply algorithms. For example, there has been little domestic objection to a new state-driven initiative to monitor behavior — from purchases to social media activity to travel — using AI to assign a corresponding "social score." The score would translate to a level of "trustworthiness" that would allow, or deny, access to certain privileges. The program, meant to be fully operational by 2020, will track citizens, government officials and businesses. Similarly, facial recognition technology is already used, though not ubiquitously, throughout the country and is projected to play an increasingly important role in Chinese law enforcement and governance. China's embrace of such algorithm-based systems would make it among the first entities to place such a hefty reliance on the decision-making capabilities of computers.
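In software terms, the scoring mechanism described above amounts to a weighted sum mapped onto access tiers. The Python sketch below is purely illustrative; the behavior categories, weights and cutoffs are invented and are not drawn from any documented system.

# Hypothetical categories and weights, invented for illustration.
WEIGHTS = {"on_time_payments": 2.0, "flagged_posts": -3.0, "travel_violations": -1.5}

def social_score(activity):
    # Weighted sum of observed behaviors.
    return sum(WEIGHTS.get(key, 0.0) * count for key, count in activity.items())

def privileges(score):
    # Map the score onto access tiers (cutoffs are arbitrary examples).
    granted = []
    if score > 0:
        granted.append("standard airfare class")
    if score > 5:
        granted.append("loan eligibility")
    return granted

citizen = {"on_time_payments": 4, "flagged_posts": 1, "travel_violations": 0}
score = social_score(citizen)
print(score, privileges(score))   # 5.0 ['standard airfare class']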

When Ethics Cross Borders and Machine Autonomy Increases

Within a country's borders, the use of AI technology for domestic security and governance purposes may certainly raise questions from human rights groups, but those questions are amplified when use of the technology crosses borders and affects international relationships. One example is Google's potential project to develop a censored search app for the Chinese market. In seeking to take advantage of China's market by adhering to the country's rules and regulations, Google could also be seen as perpetuating the Chinese government's values and views on censorship. The company left China in 2010 over objections to that very matter.

And these current issues are relatively small in comparison to questions looming on the horizon. Ever-improving algorithms and applications will soon prompt queries about how much autonomy machines "should" have, going far beyond today's credit scores, loans or even social scores. Take automated driving, for example, a seemingly more innocuous application of artificial intelligence and automation. How much control should a human have while in a vehicle? If there is no human involved, who is responsible if and when there is an accident? The answer varies depending where the question is asked. In societies that trust in technology more, like Japan, South Korea or China, the ability to remove key components from cars, such as steering wheels, in the future will likely be easier. In the United States, despite its technological prowess and even as General Motors is applying for the ability to put cars without steering wheels on the road, the current U.S. administration appears wary.

Defense, the Human Element and the First Rule of Robotics

Closely paraphrased, Asimov's first rule of robotics is that a robot should never harm a human through action or inaction. The writer was known as a futurist and thinker, and his rule still resonates. In terms of global governance and international policy, decisions over the limits of AI's decision-making power will be vital to determining the future of the military. How much human involvement, after all, should be required when it comes to decisions that could result in the loss of human life? Advancements in AI will drive the development of remote and asymmetric warfare, requiring the U.S. Department of Defense to make ethical decisions prompted by both Silicon Valley and the Chinese government.

At the dawn of the nuclear age, the scientific community questioned the ethical nature of using nuclear understanding for military purposes. More recently, companies in Silicon Valley have been asking similar questions about whether their technological developments should be used in warfare. Google has been vocal about its objections to working with the U.S. military. After controversy and internal dissent about the company's role in Project Maven, a Pentagon-led project to incorporate AI into the U.S. defense strategy, Google CEO Sundar Pichai penned the company's own rules of AI ethics, which required, much like Asimov intended, that it not develop AI for weaponry or uses that would cause harm. Pichai also stated that Google would not contribute to the use of AI in surveillance that pushes the boundaries of "internationally accepted norms." Recently, Google pulled out of bidding for a Defense Department cloud computing project as part of JEDI (Joint Enterprise Defense Initiative). Microsoft employees also issued a public letter voicing objections to their own company's intent to bid for the same contract. Meanwhile, Amazon's CEO, Jeff Bezos, whose company is still in the running for the JEDI contract, has bucked this trend, voicing his belief that technology companies partnering with the U.S. military is necessary to ensure national security.

There are already certain ethical guidelines in place when it comes to integrating AI into military operations. Western militaries, including that of the United States, have pledged to always maintain a "human-in-the-loop" structure for operations involving armed unmanned vehicles, so as to avoid the ethical and legal consequences of AI-driven attacks. But these rules may evolve as technology improves. The desire for quick decisions, the high cost of human labor and basic efficiency needs are all bound to challenge countries' commitment to keeping a human in the loop. After all, AI could function like a non-human commander, making command and control decisions conceivably better than any human general could.
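For concreteness, a "human-in-the-loop" structure boils down to the rule that the model may only recommend; a person must release the action. The Python sketch below is a hypothetical, heavily simplified illustration; the function names, the threat-score threshold and the console prompt are invented.

def ai_recommendation(sensor_data):
    # Placeholder for whatever model scores the target (threshold is arbitrary).
    return "engage" if sensor_data.get("threat_score", 0.0) > 0.9 else "hold"

def execute_engagement(target_id):
    print(f"engaging {target_id}")

def human_in_the_loop(target_id, sensor_data):
    # The model only recommends; release authority stays with the human operator.
    if ai_recommendation(sensor_data) != "engage":
        return
    answer = input(f"AI recommends engaging {target_id}. Authorize? [y/N] ")
    if answer.strip().lower() == "y":
        execute_engagement(target_id)

human_in_the_loop("T-042", {"threat_score": 0.95})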

Even if the United States still abides by the guidelines, other countries — like China — may have far less motivation to do so. China has already challenged international norms in a number of arenas, including the World Trade Organization, and may well see it as a strategic imperative to employ AI in controversial ways to advance its military might. It's unclear where China will draw the line and how it will match up with Western military norms. But it's relatively certain that if one great power begins implementing cutting-edge technology in controversial ways, others will be forced to consider whether they are willing to let competing countries set ethical norms.
Rebecca Keller focuses on areas where science and technology intersect with geopolitics. This diverse area of responsibility includes changes in agricultural technology and water supplies that affect global food supplies, nanotechnology and other developments.
Title: Stratfor: AI makes personal privacy a matter of national strategy
Post by: Crafty_Dog on October 18, 2018, 09:59:51 AM
Third post

AI Makes Personal Privacy a Matter of National Strategy
By Rebecca Keller
Senior Science and Technology Analyst, Stratfor

    Growing concern in the United States and Europe over the collection and distribution of personal data could decrease the quality and accessibility of a subset of data used to develop artificial intelligence.
    Though the United States is still among the countries best poised to take advantage of AI technologies to drive economic growth, changes in privacy regulations and social behaviors will impair its tech sector over the course of the next decade.
    China, meanwhile, will take the opportunity to close the gap with the United States in the race to develop AI. 

It seems that hardly a 24-hour news cycle passes without a story about the latest social media controversy. We worry about who has our information, who knows our buying habits or searching habits, and who may be turning that information into targeted ads for products or politicians. Calls for stricter control and protection of privacy and for greater transparency follow. Europe will implement a new set of privacy regulations later this month — the culmination of a yearslong negotiating process and a move that could ease the way, however eventually, for similar policies in the United States. Individuals, meanwhile, may take their own steps to guard their data. The implications of that reaction could reverberate far beyond our laptops or smartphones. They will handicap the United States in the next leg of the technology race with China.

Big Picture

Artificial intelligence is more than simply a disruptive technology. It is poised to become an anchor for the Fourth Industrial Revolution and to change the factors that contribute to economic growth. As AI develops at varying rates throughout the world, it will influence the global competition underway between the world's great powers.

See The 4th Industrial Revolution

More than a quarter-century after the fall of the Soviet Union, the world is slowly shifting away from a unipolar system. As the great powers compete for global influence, technology will become an increasingly important part of their struggle. The advent of disruptive technologies such as artificial intelligence stands to revolutionize the ways in which economies function by changing the weight of the factors that fuel economic growth. In several key sectors, China is quickly catching up to its closest competitor in technology, the United States. And in AI, it could soon gain an advantage.

Of the major contenders in the AI arena today, China places the least value on individual privacy, while the European Union places the most. The United States is somewhere in between, though recent events seem to be pushing the country toward more rigorous privacy policies. Since the scandal erupted over Cambridge Analytica's use of Facebook data to target political ads in the 2016 presidential election, outcry has been building in the United States among internet users who want greater control over their personal data. But AI runs on data. AI algorithms use robust sets of data to learn, honing their pattern recognition and predictive abilities. Much of that data comes from individuals.

Learning to Read Personal Data

Online platforms such as social media networks, retail sites, search engines and ride-hailing apps all collect vast amounts of data from their users. Facebook collects a total of nearly 200 billion data points in 98 categories. Amazon's virtual assistant, Alexa, tracks numerous aspects of its users' behavior. Medical databases and genealogy websites gather troves of health and genetic information, and the GPS on our smartphones can track our every move. Drawing on this wealth of data, AI applications could evolve that would revolutionize aspects of everyday life far beyond online shopping. The data could enable applications to track diseases and prevent or mitigate future outbreaks, to help solve cold criminal cases, to relieve traffic congestion, to better assess risk for insurers, or to increase the efficiency of electrical grids and decrease emissions. The potential productivity gains that these innovations offer, in turn, would boost global economic growth.

Using the wealth of data that online platforms collect, AI applications could evolve to revolutionize aspects of everyday life far beyond online shopping.

To reap the greatest benefit, however, developers can't use just any data. Quality is as important as quantity, and that means ensuring that data collection methods are free of inherent bias. Preselecting participants for a particular data set, for example, would introduce bias to it. Likewise, placing a higher value on privacy, as many countries in the West are doing today, could skew data toward certain economic classes. Not all internet users, after all, will have the resources to pay to use online platforms that better protect personal data or to make informed choices about their privacy.
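The preselection problem mentioned above is easy to demonstrate numerically. The Python sketch below uses invented numbers (a synthetic population and an arbitrary cutoff) purely to show how a preselected data set shifts the statistics a model would learn from.

import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=50, scale=15, size=100_000)   # some attribute, arbitrary units

# "Preselected" sample: only people above a cutoff ever enter the data set.
preselected = population[population > 60]
random_sample = rng.choice(population, size=preselected.size, replace=False)

print("population mean:   ", round(float(population.mean()), 1))
print("random-sample mean:", round(float(random_sample.mean()), 1))
print("preselected mean:  ", round(float(preselected.mean()), 1))   # visibly biased upward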

Calls for greater transparency in data collection also will pose a challenge for AI developers in the West. The European Union's General Data Protection Regulation, effective May 25, will tighten restrictions on all companies that handle the data of EU citizens, many of which are headquartered in the United States. The new regulation may prove difficult to enforce in practice, but it will nevertheless force companies around the world to improve their data transparency. And though the United States is still in the best position to take economic advantage of the AI revolution, thanks to its regulatory environment, the growing cultural emphasis on privacy could hinder technological development over the next decade.

The Privacy Handicap

As a general rule, precautionary regulations pose a serious threat to technological progress. The European Union historically has been more proactive than reactive in regulating innovation, a tendency that has done its part in hampering the EU tech sector. The United States, on the other hand, traditionally has fallen into the category of permissionless innovator — that is, a country that allows technological innovations to develop freely before devising the regulations to govern them. This approach has facilitated its rise to the fore in the global tech scene. While the United States still leads the pack in AI, recent concerns about civil liberties could slow it down relative to other tech heavyweights, namely China. The public demands for transparency and privacy aren't going away anytime soon. Furthermore, as AI becomes more powerful, differential privacy — the ability to extract useful information from personal data without identifying any individual source — will become more difficult to preserve.
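For the technically inclined, the standard tool behind differential privacy is the Laplace mechanism: publish an aggregate statistic with noise calibrated to its sensitivity and a privacy budget epsilon, so that no single person's presence in the data is identifiable. The Python sketch below is a minimal illustration; the epsilon value and the example count are arbitrary.

import numpy as np

def private_count(true_count, epsilon=0.5, sensitivity=1.0):
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g. "how many users in this region clicked the ad today?"
print(private_count(1042))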

These are issues that China doesn't have to worry about yet. For the most part, Chinese citizens don't have the same sensitivity over matters of individual privacy as their counterparts in the West. And China is emerging as a permissionless innovator, like the United States. Chinese privacy protections are vague and give the state wide latitude to collect information for security purposes. As a result, its government and the companies working with it have more of the information they need to make their own AI push, which President Xi Jinping has highlighted as a key national priority. Chinese tech giants Baidu, Alibaba and Tencent are all heavily invested in AI and are working to gather as much data as possible to build their AI empire. Together, these factors could help China gain ground on its competition.

In the long run, however, privacy is likely to become a greater priority in China. Chinese corporations value privacy, despite their history of intellectual property violations against the West, and they will take pains to protect their innovations. In addition, the country's younger generations and growing middle class probably will have more of an interest in securing their personal information. A recent art exhibit in China displayed the online data of more than 300,000 individuals, indicating a growing awareness of internet privacy among the country's citizenry.

Even so, over the course of the next decade, the growing concern in the West over privacy could hobble the United States in the AI race. The push for stronger privacy protections may decrease the quality of the data U.S. tech companies use to train and test their AI applications. But the playing field may well even out again. As AI applications continue to improve, more people in the United States will come to recognize their wide-ranging benefits in daily life and in the economy. The value of privacy is constantly in flux; the modern-day notion of a "right to privacy" didn't take shape in the United States until the mid-20th century. In time, U.S. citizens may once again be willing to sacrifice their privacy in exchange for a better life.

Title: Karl Friston, the Free Energy Principal and Artificial Intelligence
Post by: Crafty_Dog on November 17, 2018, 10:42:38 AM
https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/?fbclid=IwAR3S1fnz7hby3wiiem2rHBa0VebxVc-shz7TjTlRVdpKPlPvpwNNnoGaFOM
Title: Business Intelligence Expert
Post by: Crafty_Dog on November 17, 2018, 10:44:40 AM
second post

https://www.nytimes.com/2018/11/11/business/intelligence-expert-wall-street.html
Title: Brainwaves encode the grammar of human language
Post by: Crafty_Dog on November 18, 2018, 10:48:11 AM
http://maxplanck.nautil.us/article/341/brainwaves-encode-the-grammar-of-human-language?utm_source=Nautilus&utm_campaign=4ce4a84e17-EMAIL_CAMPAIGN_2018_11_16_11_07&utm_medium=email&utm_term=0_dc96ec7a9d-4ce4a84e17-61805061
Title: Radical New Neural Network could overcome big challenges in AI
Post by: Crafty_Dog on December 13, 2018, 07:46:51 AM


https://www.technologyreview.com/s/612561/a-radical-new-neural-network-design-could-overcome-big-challenges-in-ai/?fbclid=IwAR3uYX6zQ2u28OfvjuNyMEW5chMzELpiiOSDbuqL1eCuD5lO6BaNEK_QpfU
Title: The man turning China into a quantum superpower
Post by: Crafty_Dog on December 23, 2018, 12:10:25 PM
https://www.technologyreview.com/s/612596/the-man-turning-china-into-a-quantum-superpower/?utm_source=pocket&utm_medium=email&utm_campaign=pockethits
Title: Jordan Peterson: Intelligence, Race, and the Jewish Question
Post by: Crafty_Dog on December 30, 2018, 11:46:52 AM


https://www.youtube.com/watch?v=m91vhePuzdo
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on January 14, 2019, 01:35:09 PM
Very interesting and scary piece on AI on this week's "60 Minutes"-- worth tracking down.
Title: AI Fake Text generator to dangerous to release?
Post by: Crafty_Dog on February 15, 2019, 11:48:16 AM


https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction?fbclid=IwAR0zuK-7FQXyfld2XEVKBCRW0afuXRlMCitVLRUr061kmiJf8u12mLk0Sk0
Title: Are we close to solving the puzzle of consciousness?
Post by: Crafty_Dog on April 01, 2019, 12:34:53 AM
http://www.bbc.com/future/story/20190326-are-we-close-to-solving-the-puzzle-of-consciousness?fbclid=IwAR3imjeuYOdUEEzCHzUCozS9NjBVUXB_oDhvUwSHS0OY1JOPdcgIv4FZOLI
Title: The challenge of going off psychiatric drugs
Post by: Crafty_Dog on April 02, 2019, 12:03:13 PM
https://www.newyorker.com/magazine/2019/04/08/the-challenge-of-going-off-psychiatric-drugs?utm_campaign=aud-dev&utm_source=nl&utm_brand=tny&utm_mailing=TNY_Magazine_Daily_040119&utm_medium=email&bxid=5be9d3fa3f92a40469e2d85c&user_id=50142053&esrc=&utm_term=TNY_Daily
Title: Chinese Eugenics
Post by: Crafty_Dog on April 13, 2019, 10:47:46 PM
https://futurism.com/the-byte/chinese-scientists-super-monkeys-human-brain-genes?fbclid=IwAR2iE3DS7Prc5aOn72VvUYT1osa3w-8qRUHiaFEc5WU35pfTHqIsQ58lu9Y
Title: An ethological approach to reason
Post by: Crafty_Dog on May 13, 2019, 11:41:49 AM
http://nautil.us/blog/the-problem-with-the-way-scientists-study-reason?fbclid=IwAR0I4_cnBrzARrCapxdsOvQlnwX4wPmFKMbcJ7LguWB4QTILidC6t3ezeOg
Title: The Geometry of Thought
Post by: Crafty_Dog on September 29, 2019, 08:04:56 PM


https://getpocket.com/explore/item/new-evidence-for-the-strange-geometry-of-thought?utm_source=pocket-newtab&fbclid=IwAR1k6-QAx0THHQJ7gJ-6iWRSQf8qw5RUGpFw-BadNABpynbQ2lYnTMcljxo