Dog Brothers Public Forum


Author Topic: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments  (Read 339951 times)
Power User
Posts: 42527

« Reply #850 on: October 03, 2013, 07:56:06 AM »

 WASHINGTON — The National Security Agency conducted a secret pilot project in 2010 and 2011 to test the collection of bulk data about the location of Americans’ cellphones, but the agency never moved ahead with such a program, according to intelligence officials.

The existence of the pilot project was reported on Wednesday morning by The New York Times and later confirmed by James R. Clapper, the director of national intelligence, at a Senate Judiciary Committee hearing. The project used data from cellphone towers to locate people’s cellphones.

In his testimony, Mr. Clapper revealed few details about the project. He said that the N.S.A. does not currently collect locational information under Section 215 of the Patriot Act, the provision the government says is the legal basis for the N.S.A.’s once-secret program under which it collects logs of all domestic calls from telephone companies.

“In 2010 and 2011, N.S.A. received samples in order to test the ability of its systems to handle the data format, but that data was not used for any other purpose and was never available for intelligence analysis purposes,” Mr. Clapper said.

He added that the N.S.A. had promised to notify Congress and seek the approval of a secret surveillance court in the future before any locational data was collected using Section 215.

An official familiar with the test project said its purpose was to see how the locational data would flow into the N.S.A.’s systems. While real data was used, it was never drawn upon in any investigation, the official said. It was unclear how many Americans’ locational data was collected as part of the project, whether the agency has held on to that information or why the program did not go forward.

But Senator Ron Wyden, an Oregon Democrat who receives classified briefings as a member of the Intelligence Committee and who has raised concerns about cellphone location tracking, said in a statement that there was more to know about the matter than the government had now declassified.

“After years of stonewalling on whether the government has ever tracked or planned to track the location of law-abiding Americans through their cellphones, once again, the intelligence leadership has decided to leave most of the real story secret — even when the truth would not compromise national security,” Mr. Wyden said.

 Gen. Keith B. Alexander, the director of the N.S.A., who also testified Wednesday at the hearing, sharply criticized an article on the agency in The New York Times on Sunday. He said it was “flat wrong” that the agency was “creating dossiers on Americans from social networks.” He added that “we’re not creating social networks on our families.”

 The article, based on documents leaked by the former N.S.A. contractor Edward J. Snowden, said that the agency changed a policy several years ago to allow “contact chaining” of Americans who had been in touch, directly or indirectly, with foreign intelligence suspects, using phone and e-mail logging data. It also described the process of data “enrichment,” by which other data — including information that is publicly or commercially available — is added to flesh out analysts’ understanding of people associated with various phone numbers in the social network analysis.
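The "contact chaining" the article describes is, at bottom, a breadth-first walk over a communications graph. A minimal Python sketch of the idea (the data, function name, and hop limit here are hypothetical illustrations, not the agency's actual tooling):

```python
from collections import deque

def contact_chain(call_log, seed, max_hops=2):
    """Return every identifier reachable from `seed` within `max_hops`
    direct-or-indirect contacts, mapped to its hop distance."""
    # Build an undirected adjacency map from (caller, callee) pairs.
    graph = {}
    for a, b in call_log:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    hops = {seed: 0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if hops[node] == max_hops:
            continue  # do not expand past the hop limit
        for neighbor in graph.get(node, ()):
            if neighbor not in hops:
                hops[neighbor] = hops[node] + 1
                queue.append(neighbor)
    del hops[seed]
    return hops

calls = [("suspect", "alice"), ("alice", "bob"), ("bob", "carol")]
print(contact_chain(calls, "suspect"))  # {'alice': 1, 'bob': 2}
```

Note how quickly the reachable set grows: with two hops, one seed pulls in not just direct contacts but everyone those contacts talked to, which is why the number of Americans swept in is so hard to bound.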

The article said it was not known how many Americans’ data was used in this process.

The chairman of the Senate Judiciary Committee, Senator Patrick Leahy, Democrat of Vermont, said Wednesday that he was drafting legislation to eliminate the N.S.A.’s ability to systematically obtain Americans’ calling records.

“The government has not made its case that bulk collection of domestic phone records is an effective counterterrorism tool, especially in light of the intrusion on American privacy,” Mr. Leahy said.

But Senator Dianne Feinstein of California, the chairwoman of the Senate Intelligence Committee, warned that ending the bulk call records program would increase the risk of a terrorist attack.

“I so regret what is happening; I will do everything I can to prevent this program from being canceled out,” she said.

Questions about what, if anything, the agency has been doing to track Americans’ movements using cellphone location data have been simmering for years. The issue flared up again after an ambiguous exchange between Mr. Wyden and General Alexander at a Senate Intelligence Committee hearing last week.

Mr. Wyden has been a critic of domestic surveillance programs and filed legislation in 2011 and again this year to require warrants for obtaining someone’s locational data in a criminal investigation. He has not disclosed what prompted his concerns.

At the hearing last week, Mr. Wyden asked Mr. Alexander “whether the N.S.A. has ever collected or made any plans to collect Americans’ cell-site information in bulk.”

General Alexander replied that the N.S.A. was not “receiving cell-site location data and has no current plans to do so” under Section 215 of the Patriot Act, which allows the secret surveillance court to issue orders for records from businesses — like telephone companies — if the records are “relevant” to an intelligence investigation.

But General Alexander also said last week that there was other classified information that the N.S.A. had sent to the committee that provided “additional detail.”

It is unclear whether long-term tracking of people’s movements by the government raises privacy rights under the Fourth Amendment. In a 1979 case involving the small-scale collection of calling logs, the Supreme Court ruled that such records were not protected by constitutional privacy rights because people had already revealed the existence of their calls to telephone companies.

But in a 2012 case about the police’s use of a GPS tracker attached to a suspect’s car, five justices suggested that any long-term, automated collection of a person’s publicly displayed actions might raise Fourth Amendment issues.
« Last Edit: October 03, 2013, 08:06:02 AM by Crafty_Dog »

« Reply #851 on: October 10, 2013, 07:03:58 AM »

Monitoring Your Every Move
Published: October 9, 2013

You may have even less privacy than you thought.


Most Internet users know that Web sites and advertisers monitor what they do online and use that information to pitch products and services. What’s not as well known is that these companies can track individuals as they move between devices like personal computers, cellphones and tablets. This type of “cross-device” tracking raises significant privacy concerns because most users are simply unaware that it is taking place.

Internet companies capable of such monitoring do it through various means, including by figuring out if different devices are using the same Internet connection and are visiting the same Web sites and mobile apps. If, for instance, you have used your home computer to research a Hawaiian vacation, travel companies can show you ads for flights to Honolulu on apps you use on your cellphone.

Internet businesses argue that such targeting benefits everybody: advertisers get access to customers who are more likely to buy their products while individuals receive offers for stuff they are interested in. (The New York Times’s mobile apps include software from advertising networks that gather nonpersonal information about how readers use the newspaper.)

But there’s also a big privacy issue. Many Americans worry that the Internet has already extracted more personal information about them they would like. Now comes the news that advertisers can follow people from work computer to tablet computer to cellphone even though those devices are not connected to one another. New technology also allows advertisers access to mobile phones without the “cookies” they need to access personal computers. This makes it harder than ever for users to escape the gaze of private companies.

By connecting information from these devices, database companies that collect information can know a lot more about individuals than previously thought possible, including, for instance, their physical location and the identity of family members, friends and colleagues. The use of this information to target advertising might amount to a mere annoyance to most people. But such information could also end up in detailed individual profiles that could be obtained by government agencies or purchased by employers or banks to evaluate candidates for jobs or loans.

At some point, the makers of computers, phones and software may devise new tools that allow people to protect themselves from sophisticated forms of tracking. But they will always be one step behind firms that are in the business of collecting information.

The best solution is for lawmakers to pass legislation that sets clear rules that would regulate and limit how businesses collect personal information, what they can use it for and how long they keep it. The rules, which could be enforced by the Federal Trade Commission, should also give consumers an easy way to review files about themselves or simply choose not to have the information collected. At the moment, the advantage on the Internet lies increasingly with the data miners and the advertisers, not the consumer.

Power User
Posts: 9482

« Reply #852 on: October 10, 2013, 11:20:55 AM »

Good points, but since it is the NYT, a loss of privacy is troubling if the aim is private commerce or to advance national security, but not when it occurs in a government healthcare takeover.

« Reply #853 on: October 14, 2013, 10:11:27 AM »

« Reply #854 on: October 15, 2013, 10:05:43 PM »

« Reply #855 on: October 17, 2013, 07:21:54 AM »

Door May Open for Challenge to Secret Wiretaps
Published: October 16, 2013


WASHINGTON — Five years after Congress authorized a sweeping warrantless surveillance program, the Justice Department is setting up a potential Supreme Court test of whether it is constitutional by notifying a criminal defendant — for the first time — that evidence against him derived from the eavesdropping, according to officials.

Senator Dianne Feinstein gave a speech in 2012 that some took to suggest that warrantless wiretaps contributed to several terrorism cases. A Senate lawyer now says she was misunderstood.

Prosecutors plan to inform the defendant about the monitoring in the next two weeks, a law enforcement official said. The move comes after an internal Justice Department debate in which Solicitor General Donald B. Verrilli Jr. argued that there was no legal basis for a previous practice of not disclosing links to such surveillance, several Obama administration officials familiar with the deliberations said.

Meanwhile, the department’s National Security Division is combing active and closed case files to identify other defendants who faced evidence resulting from the 2008 wiretapping law. It permits eavesdropping without warrants on Americans’ cross-border phone calls and e-mails so long as the surveillance is “targeted” at foreigners abroad.

It is not yet clear how many other such cases there are, nor whether prosecutors will notify convicts whose cases are already over. Such a decision could set off attempts to reopen those cases.

“It’s of real legal importance that components of the Justice Department disagreed about when they had a duty to tell a defendant that the surveillance program was used,” said Daniel Richman, a Columbia University law professor. “It’s a big deal because one view covers so many more cases than the other, and this is an issue that should have come up repeatedly over the years.”

The officials spoke on the condition of anonymity because they were not authorized to disclose internal discussions. The Wall Street Journal  previously reported on a recent court filing in which the department, reversing an earlier stance, said it was obliged to disclose to defendants if evidence used in court was linked to warrantless surveillance, but it remained unclear if there were any such cases.

The debate was part of the fallout about National Security Agency surveillance set off by leaks by Edward J. Snowden, the former N.S.A. contractor. They have drawn attention to the 2008 law, the FISA Amendments Act, which legalized a form of the Bush administration’s once-secret warrantless surveillance program.

In February, the Supreme Court dismissed a case challenging its constitutionality because the plaintiffs, led by Amnesty International, could not prove they had been wiretapped. Mr. Verrilli had told the justices that someone else would have legal standing to trigger review of the program because prosecutors would notify people facing evidence derived from surveillance under the 2008 law.

But it turned out that Mr. Verrilli’s assurances clashed with the practices of national security prosecutors, who had not been alerting such defendants that evidence in their cases had stemmed from wiretapping their conversations without a warrant.

Jameel Jaffer, an American Civil Liberties Union lawyer who argued in the Supreme Court on behalf of the plaintiffs challenging the 2008 law, said that someone in the Justice Department should have flagged the issue earlier and that the department must do more than change its practice going forward.

“The government has an obligation to tell the Supreme Court, in some formal way, that a claim it made repeatedly, and that the court relied on in its decision, was simply not true,” he said. “And it has an obligation to notify the criminal defendants whose communications were monitored under the statute that their communications were monitored.”

A Justice Department spokesman declined to comment. The department’s practices came under scrutiny after a December 2012 speech by Senator Dianne Feinstein, the chairwoman of the Intelligence Committee. During debate over extending the 2008 law, she warned that terrorism remained a threat. Listing several terrorism-related arrests, she added, “so this has worked.”

Lawyers in two of the cases Ms. Feinstein mentioned — one in Fort Lauderdale and one in Chicago — asked prosecutors this spring to confirm that surveillance under the 2008 law had played a role in the investigations of their clients so they could challenge it.

But prosecutors said they did not have to make such a disclosure. On June 7, The New York Times published an article citing Ms. Feinstein’s speech and the stance the prosecutors had taken.

As a result, Mr. Verrilli sought an explanation from national security lawyers about why they had not flagged the issue when vetting his Supreme Court briefs and helping him practice for the arguments, according to officials.



The national security lawyers explained that it was a misunderstanding, the officials said. Because the rules on wiretapping warrants in foreign intelligence cases are different from the rules in ordinary criminal investigations, they said, the division has long used a narrow understanding of what “derived from” means in terms of when it must disclose specifics to defendants.

In national security cases involving orders issued under the Foreign Intelligence Surveillance Act of 1978, or FISA, prosecutors alert defendants only that some evidence derives from a FISA wiretap, but not details like whether there had just been one order or a chain of several. Only judges see those details.

After the 2008 law, that generic approach meant that prosecutors did not disclose when some traditional FISA wiretap orders had been obtained using information gathered through the warrantless wiretapping program. Division officials believed the division would have to disclose the use of that program only if prosecutors introduced a recorded phone call or intercepted e-mail gathered directly from it — and for five years, they avoided doing so.

For Mr. Verrilli, that raised a more fundamental question: was there any persuasive legal basis for failing to clearly notify defendants that they faced evidence linked to the 2008 warrantless surveillance law, thereby preventing them from knowing that they had an opportunity to argue that it derived from an unconstitutional search?

The debate stretched through June and July, officials said, including multiple meetings and dueling memorandums by lawyers in the solicitor general office and in the national security division, which has been led since March by acting Assistant Attorney General John Carlin. The deliberations were overseen by James Cole, the deputy attorney general.

National security lawyers and a policy advisory committee of senior United States attorneys focused on operational worries: Disclosure risked alerting foreign targets that their communications were being monitored, so intelligence agencies might become reluctant to share information with law enforcement officials that could become a problem in a later trial.

But Mr. Verrilli argued that withholding disclosure from defendants could not be justified legally, officials said. Lawyers with several agencies — including the Federal Bureau of Investigation, the N.S.A. and the office of the director of national intelligence — concurred, officials said, and the division changed the practice going forward.

National Security Division lawyers began looking at other cases, eventually identifying the one that will be publicly identified soon and are still looking through closed cases and deciding what to do about them.

But in a twist, in the Chicago and Fort Lauderdale cases that Ms. Feinstein had mentioned, prosecutors made new court filings saying they did not intend to use any evidence derived from surveillance of the defendants under the 2008 law.

When defense lawyers asked about Ms. Feinstein’s remarks, a Senate lawyer responded in a letter that she “did not state, and did not mean to state” that those cases were linked to the warrantless surveillance program. Rather, the lawyer wrote, her point was that terrorism remained a problem.

In a recent court filing, the lawyers wrote that it is “hard to believe” Ms. Feinstein would cite “random” cases when pressing to reauthorize the 2008 law, suggesting either that the government is still concealing something or that she had employed the “politics of fear” to influence the debate. A spokesman for Ms. Feinstein said she preferred to let the letter speak for itself.

« Reply #856 on: October 23, 2013, 08:26:08 PM »

Hat tip to BD; pasting this here from his post in the Constitutional Law thread

« Reply #857 on: October 31, 2013, 12:27:49 PM »

« Reply #858 on: November 02, 2013, 10:43:40 AM »

Delaware, Den of Thieves?
Published: November 1, 2013

OUTSIDE of crimes of passion, criminal activity is typically motivated by greed.

As a special agent for the Treasury Department, I investigated financial crimes like money laundering and terrorism financing. I trained foreign police forces to “follow the money” and track the flow of capital across borders.

During these training sessions, I’d often hear this: “My agency has a financial crimes investigation. The money trail leads to the American state of Delaware. We can’t get any information and don’t know what to do. We are going to have to close our investigation. Can you help?”

The question embarrassed me. There was nothing I could do.

In the years I was assigned to Treasury’s Financial Crimes Enforcement Network, or Fincen, I observed many formal requests for assistance having to do with companies associated with Delaware, Nevada or Wyoming. These states have a tawdry image: they have become nearly synonymous with underground financing, tax evasion and other bad deeds facilitated by anonymous shell companies — or by companies lacking information on their “beneficial owners,” the person or entity that actually controls the company, not the (often meaningless) name under which the company is registered.

Our State and Treasury Departments routinely identify countries that are havens for financial crimes. But, whether because of shortsightedness or hypocrisy, we overlook the financial crimes that are abetted in our own country by lax state laws. While the problem is concentrated in Delaware, there has been a “race to the bottom” by other states that have enacted corporate secrecy laws to try to attract incorporation fees.

The Financial Action Task Force, an international body that sets standards for the fight against money laundering, terrorist financing and other threats to the international financial system, has repeatedly criticized America for failing to comply with a guideline requiring the disclosure of beneficial ownership information. The Organization for Economic Cooperation and Development, with which the task force is affiliated, has championed international standards for financial transparency, but cannot compel compliance.

Watchdog groups like the Organized Crime and Corruption Reporting Project, Global Financial Integrity and Global Witness say that anonymous companies registered in the United States have become the vehicle of choice for drug dealers, organized criminals and corrupt politicians to evade taxes and launder illicit funds. A study by researchers at Brigham Young University, the University of Texas and Griffith University in Australia concluded that America was the second easiest country, after Kenya, in which to incorporate a shell company.

Domestic law enforcement agencies are as stymied as foreign ones. In one case I worked on, American investigators had to give up their examination of a Nevada-based corporation that had received more than 3,700 suspicious wire transfers totaling $81 million over two years. The case did not result in prosecution because the investigators could not definitively identify the owners.

Anonymous corporations are not only favored tools of criminals, but they also facilitate corruption, particularly in the developing world. A recent World Bank study found that the United States was the favored destination for corrupt foreign politicians opening phantom companies to conceal their ill-gotten gains.

Last month, Representatives Maxine Waters of California and Carolyn B. Maloney of New York, the top Democrats on the House Financial Services Committee, introduced legislation that would require United States corporations to disclose to the Treasury Department their beneficial owners. On Thursday, Prime Minister David Cameron of Britain went even further, announcing that a planned national registry of companies’ true owners would be open to the public, not just to law enforcement authorities.

The proposal enjoys support from law enforcement experts like Dennis M. Lormel, who led the F.B.I.’s efforts against terrorism financing after 9/11, and the former Manhattan district attorney Robert M. Morgenthau (and his successor, Cyrus R. Vance Jr.).

While officials in Delaware, Wyoming and Nevada talk about their corporate “traditions,” I am unimpressed. Business incorporation fees have accounted for as much as a quarter of Delaware’s general revenues. It’s no surprise that officials in Dover and Wilmington want to protect their state’s status as a corporate registry, but if that means facilitating criminal activity, their stance is a form of willful blindness. America must require uniform corporate-registration practices if it is to persuade other nations to cooperate in the fight against financial crimes.

John A. Cassara, a former special agent for the Treasury Department, is the author, most recently, of a novel, “Demons of Gadara.”

« Reply #859 on: November 17, 2013, 12:52:02 AM »

      NSA Harvesting Contact Lists

A new Snowden document shows that the NSA is harvesting contact lists --
e-mail address books, IM buddy lists,  etc. -- from Google, Yahoo,
Microsoft, Facebook, and others.

Unlike PRISM, this unnamed program collects the data from the Internet.
This is similar to how the NSA identifies Tor users.  They get direct
access to the Internet backbone, either through secret agreements with
companies like AT&T, or surreptitiously, by doing things like tapping
undersea cables.  Once they have the data, they have powerful packet
inspectors -- code names include TUMULT, TURBULENCE, and TURMOIL -- that
run a bunch of different identification and copying systems.  One of
them, code name unknown, searches for these contact lists and copies
them.  Google, Yahoo, Microsoft, etc., have no idea that this is
happening, nor have they consented to their data being harvested in this
way.
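The kind of filtering described above can be pictured as a pattern matcher running over captured payloads. A deliberately simplified sketch (a real backbone system works on reassembled sessions and protocol-specific parsers, not single packets, and the example data is invented):

```python
import re

# Toy "deep packet inspection": scan a captured payload for e-mail
# addresses, the way a backbone filter might flag address-book traffic.
EMAIL = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_addresses(payload: bytes) -> set:
    """Return the set of e-mail addresses found in a raw payload."""
    return set(EMAIL.findall(payload))

packet = b'{"buddies": ["alice@example.com", "bob@example.org"]}'
print(sorted(extract_addresses(packet)))
# [b'alice@example.com', b'bob@example.org']
```

The point of the sketch is that once traffic is available in the clear, pulling structured identifiers out of it is trivial; the hard part is the access, not the parsing.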

These contact lists provide the NSA with the same sort of broad
surveillance that the Verizon (and others) phone-record "metadata"
collection programs provide: information about who our friends,
lovers, confidants, and associates are.  This is incredibly intimate
information, all collected without any warrant or due process.  Metadata
equals surveillance; always remember that.

The quantities are interesting:

     During a single day last year, the NSA's Special Source
     Operations branch collected 444,743 e-mail address books from
     Yahoo, 105,068 from Hotmail, 82,857 from Facebook, 33,697 from
     Gmail and 22,881 from unspecified other providers....

Note that Gmail, which uses SSL by default, provides the NSA with much
less data than Yahoo, which doesn't, despite the fact that Gmail has
many more users than Yahoo does.  (It's actually kind of amazing how
small that Gmail number is.)  This implies that, despite BULLRUN,
encryption works.  Ubiquitous use of SSL can foil NSA eavesdropping.
This is the same lesson we learned from the NSA's attempts to break Tor:
encryption works.

In response to this story, Yahoo has finally decided to enable SSL by
default, beginning in January 2014.
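In practice, "ubiquitous SSL" means clients that always negotiate TLS with certificate verification on. In Python's standard library, the secure defaults look like this (shown only to illustrate what an encrypted-by-default client configures; it opens no network connection):

```python
import ssl

# Build a TLS context with modern defaults: certificate verification
# and hostname checking are both enabled out of the box.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

A passive eavesdropper on the wire sees only ciphertext for any connection wrapped with such a context, which is exactly the property that made the Gmail numbers so small.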

The "New York Times" makes this observation:

     Spokesmen for the eavesdropping organizations reassured The
     Post that we shouldn't bother our heads with all of this. They
     have "checks and balances built into our tools," said one
     intelligence official.

     Since the Snowden leaks began, the administration has adopted
     an interesting definition of that term. It used to be that
     "checks and balances" referred to one branch of the government
     checking and balancing the other branches -- like the Supreme
     Court deciding whether laws are constitutional.

     Now the N.S.A., the C.I.A. and the White House use the term to
     refer to a secret organization reviewing the actions it has
     taken and deciding in secret by itself whether they were legal
     and constitutional.

One more amusing bit: the NSA has a spam problem.

     Spam has proven to be a significant problem for the NSA --
     clogging databases with information that holds no foreign
     intelligence value. The majority of all e-mails, one NSA
     document says, "are SPAM from 'fake addresses and never
     'delivered' to targets."


The NSA at Tor:

How the NSA gets access:

Metadata equals surveillance:


Yahoo switching to SSL by default:

NSA source documents for the story:

"New York Times" story:

« Reply #860 on: November 17, 2013, 12:53:51 AM »

second post

NSA Eavesdropping on Google and Yahoo Networks

The "Washington Post" reported that the NSA is eavesdropping on the
Google and Yahoo private networks -- the code name for the program is
MUSCULAR.  I may write more about this later, but I have some initial thoughts.

* It's a measure of how far off the rails the NSA has gone that it's
taking its Cold War–era eavesdropping tactics -- surreptitiously
eavesdropping on foreign networks -- and applying them to US
corporations.  It's skirting US law by targeting the portion of these
corporate networks outside the US.  It's the same sort of legal argument
the NSA used to justify collecting address books and buddy lists worldwide.

* Although the "Washington Post" article specifically talks about Google
and Yahoo, you have to assume that all the other major -- and many of
the minor -- cloud services are compromised this same way.  That means
Microsoft, Apple, Facebook, Twitter, MySpace, Badoo, Dropbox, and on and
on and on.

* It is well worth re-reading all the government denials about bulk
collection and direct access after PRISM was exposed.  It seems that
it's impossible to get the truth out of the NSA.  Its carefully worded
denials always seem to hide what's really going on.

* In light of this, PRISM is really just insurance: a way for the NSA to
get legal cover for information it already has.  My guess is that the
NSA collects the vast majority of its data surreptitiously, using
programs such as these.  Then, when it has to share the information with
the FBI or other organizations, it gets it again through a more public
program like PRISM.

* What this really shows is how robust the surveillance state is, and
how hard it will be to craft laws reining in the NSA.  All the bills
being discussed so far only address portions of the problem: specific
programs or specific legal justifications.  But the NSA's surveillance
infrastructure is much more robust than that.  It has many ways into our
data, and all sorts of tricks to get around the law.  Note this quote:

     John Schindler, a former NSA chief analyst and frequent
     defender who teaches at the Naval War College, said it is
     obvious why the agency would prefer to avoid restrictions where
     it can.

     "Look, NSA has platoons of lawyers, and their entire job is
     figuring out how to stay within the law and maximize collection
     by exploiting every loophole," he said. "It's fair to say the
     rules are less restrictive under Executive Order 12333 than
     they are under FISA," the Foreign Intelligence Surveillance Act.

No surprise, really.  But it illustrates how difficult meaningful reform
will be.  I wrote this in September:

     It's time to start cleaning up this mess. We need a special
     prosecutor, one not tied to the military, the corporations
     complicit in these programs, or the current political
     leadership, whether Democrat or Republican. This prosecutor
     needs free rein to go through the NSA's files and discover the
     full extent of what the agency is doing, as well as enough
     technical staff who have the capability to understand it. He
     needs the power to subpoena government officials and take their
     sworn testimony. He needs the ability to bring criminal
     indictments where appropriate. And, of course, he needs the
     requisite security clearance to see it all.

     We also need something like South Africa's Truth and
     Reconciliation Commission, where both government and corporate
     employees can come forward and tell their stories about NSA
     eavesdropping without fear of reprisal.

Without this, crafting reform legislation will be impossible.

* We don't actually know if the NSA did this surreptitiously, or if it
had assistance from another US corporation.  Level 3 Communications
provides the data links to Google, and its statement was sufficiently
non-informative as to be suspicious:

     In a statement, Level 3 said: "We comply with the laws in each
     country where we operate. In general, governments that seek
     assistance in law enforcement or security investigations
     prohibit disclosure of the assistance provided."

On the other hand, Level 3 Communications already cooperates with the
NSA, and has the codename of LITTLE:

     The document identified for the first time which telecoms
     companies are working with GCHQ's "special source" team. It
     gives top secret codenames for each firm, with BT ("Remedy"),
     Verizon Business ("Dacron"), and Vodafone Cable ("Gerontic").
     The other firms include Global Crossing ("Pinnage"), Level 3
     ("Little"), Viatel ("Vitreous") and Interoute ("Streetcar").

Again, those code names should properly be in all caps.

When I write that the NSA has destroyed the fabric of trust on the
Internet, this is the kind of thing I mean.  Google can no longer trust
its bandwidth providers not to betray the company.

* The NSA's denial is pretty lame.  It feels as if it's hardly trying.

* Finally, we need more encryption on the Internet.  We have made
surveillance too cheap, not just for the NSA but for all nation-state
adversaries.  We need to make it expensive again.


« Reply #861 on: November 17, 2013, 12:55:34 AM »

third post

      Code Names for NSA Exploit Tools

This is from a Snowden document released by "Le Monde":

     General Term Descriptions:

     HIGHLANDS: Collection from Implants
     VAGRANT: Collection of Computer Screens
     MAGNETIC: Sensor Collection of Magnetic Emanations
     MINERALIZE: Collection from LAN Implant
     OCEAN: Optical Collection System for Raster-Based Computer Screens
     LIFESAVER: Imaging of the Hard Drive
     GENIE: Multi-stage operation: jumping the airgap etc.
     BLACKHEART: Collection from an FBI Implant
     DROPMIRE: Passive collection of emanations using antenna
     CUSTOMS: Customs opportunities (not LIFESAVER)
     DROPMIRE: Laser printer collection, purely proximal access
       (***NOT*** implanted)
     DEWSWEEPER: USB (Universal Serial Bus) hardware host tap that
       provides COVERT link over USB link into a target network.
       Operates w/RF relay subsystem to provide wireless Bridge into
       target network.
     RADON: Bi-directional host tap that can inject Ethernet packets
       onto the same targets.  Allows bi-directional exploitation of
       denied networks using standard on-net tools.

There's a lot to think about in this list.  RADON and DEWSWEEPER seem
particularly interesting.

** *** ***** ******* *********** *************

      Defending Against Crypto Backdoors

We already know the NSA wants to eavesdrop on the Internet. It has
secret agreements with telcos to get direct access to bulk Internet
traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to
sift through it all. And it can identify ciphertext -- encrypted
information -- and figure out which programs could have created it.

But what the NSA wants is to be able to read that encrypted information
in as close to real-time as possible. It wants backdoors, just like the
cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone
else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure
telephone system in the mid-1990s. The Clipper Chip included something
called a LEAF: a Law Enforcement Access Field. It was the key used to
encrypt the phone conversation, itself encrypted in a special key known
to the FBI, and it was transmitted along with the phone conversation. An
FBI eavesdropper could intercept the LEAF and decrypt it, then use the
data to eavesdrop on the phone call.
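The LEAF mechanism described above can be sketched in a few lines. This is a toy model, not the actual Clipper/Skipjack design: the XOR "cipher" and 16-byte keys are stand-ins chosen only to show how a session key, transmitted encrypted under an escrow key, lets a third party decrypt without either endpoint's help.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stream "cipher": XOR with a repeating key (stand-in for Skipjack).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ESCROW_KEY = os.urandom(16)  # key held by the eavesdropping authority

def encrypt_call(plaintext: bytes) -> tuple:
    session_key = os.urandom(16)
    ciphertext = xor_bytes(plaintext, session_key)
    # The LEAF: the session key, itself encrypted under the escrow key,
    # travels alongside the conversation.
    leaf = xor_bytes(session_key, ESCROW_KEY)
    return ciphertext, leaf

def escrow_decrypt(ciphertext: bytes, leaf: bytes) -> bytes:
    # An authority holding ESCROW_KEY recovers the session key from the
    # intercepted LEAF and reads the call without contacting either endpoint.
    session_key = xor_bytes(leaf, ESCROW_KEY)
    return xor_bytes(ciphertext, session_key)

ct, leaf = encrypt_call(b"hello")
assert escrow_decrypt(ct, leaf) == b"hello"
```

The design flaw is structural: anyone who obtains ESCROW_KEY -- lawfully or not -- can decrypt every call ever made with the system.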

But the Clipper Chip faced severe backlash, and became defunct a few
years after being announced.

Having lost that public battle, the NSA decided to get its backdoors
through subterfuge: by asking nicely, pressuring, threatening, bribing,
or mandating through secret order. The general name for this program is
BULLRUN.

Defending against these attacks is difficult. We know from subliminal
channel and kleptography research that it's pretty much impossible to
guarantee that a complex piece of software isn't leaking secret
information. We know from Ken Thompson's famous talk on "trusting trust"
(first delivered in the ACM Turing Award Lectures) that you can never be
totally sure if there's a security flaw in your software.

Since BULLRUN became public last month, the security community has been
examining security flaws discovered over the past several years, looking
for signs of deliberate tampering. The Debian random number flaw was
probably not deliberate, but the 2003 Linux security vulnerability
probably was. The DUAL_EC_DRBG random number generator may or may not
have been a backdoor. The SSL 2.0 flaw was probably an honest mistake.
The GSM A5/1 encryption algorithm was almost certainly deliberately
weakened. All the common RSA moduli out there in the wild: we don't
know. Microsoft's _NSAKEY looks like a smoking gun, but honestly, we
don't know.

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address
somewhere is certainly how any hacker -- from the lowliest script kiddie
up to the NSA -- spies on our computers, it's too labor-intensive to
work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In
particular, three characteristics are important:

* Low discoverability. The less the backdoor affects the normal
operations of the program, the better. Ideally, it shouldn't affect
functionality at all. The smaller the backdoor is, the better. Ideally,
it should just look like normal functional code. As a blatant example,
an email encryption backdoor that appends a plaintext copy to the
encrypted copy is much less desirable than a backdoor that reuses most
of the key bits in a public IV (initialization vector).

* High deniability. If discovered, the backdoor should look like a
mistake. It could be a single opcode change. Or maybe a "mistyped"
constant. Or "accidentally" reusing a single-use key multiple times.
This is the main reason I am skeptical about _NSAKEY as a deliberate
backdoor, and why so many people don't believe the DUAL_EC_DRBG backdoor
is real: they're both too obvious.

* Minimal conspiracy. The more people who know about the backdoor, the
more likely the secret is to get out. So any good backdoor should be
known to very few people. That's why the recently described potential
vulnerability in Intel's random number generator worries me so much; one
person could make this change during mask generation, and no one else
would know.
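The IV trick mentioned under "low discoverability" can be made concrete. This sketch is deliberately blatant: it copies raw key bytes into a field every eavesdropper can see. A real backdoor would whiten or encrypt the leaked bits so the IV still passes statistical tests; the point here is only the mechanism.

```python
import os

KEY = os.urandom(16)  # the secret the backdoor leaks

def honest_iv() -> bytes:
    # A proper IV: fresh randomness, carrying no information about the key.
    return os.urandom(16)

def backdoored_iv(key: bytes) -> bytes:
    # Looks like a 16-byte random IV on the wire, but the first 8 bytes
    # are key bytes.  Anyone who knows the trick reads half the key off
    # every message; everyone else sees "random-looking" bytes.
    return key[:8] + os.urandom(8)

iv = backdoored_iv(KEY)
assert iv[:8] == KEY[:8]  # 64 of 128 key bits recovered from public data
```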

These characteristics imply several things:

* A closed-source system is safer to subvert, because an open-source
system comes with a greater risk of that subversion being discovered. On
the other hand, a big open-source system with a lot of developers and
sloppy version control is easier to subvert.

* If a software system only has to interoperate with itself, then it is
easier to subvert. For example, a closed VPN encryption system only has
to interoperate with other instances of that same proprietary system.
This is easier to subvert than an industry-wide VPN standard that has to
interoperate with equipment from other vendors.

* A commercial software system is easier to subvert, because the profit
motive provides a strong incentive for the company to go along with the
NSA's requests.

* Protocols developed by large open standards bodies are harder to
influence, because a lot of eyes are paying attention. Systems designed
by closed standards bodies are easier to influence, especially if the
people involved in the standards don't really understand security.

* Systems that send seemingly random information in the clear are easier
to subvert. One of the most effective ways of subverting a system is by
leaking key information -- recall the LEAF -- and modifying random
nonces or header information is the easiest way to do that.

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of
them is foolproof, but they are all useful. I'm sure there's more; this
list isn't meant to be exhaustive, nor the final word on the topic. It's
simply a starting place for discussion. But it won't work unless
customers start demanding software with this sort of transparency.

* Vendors should make their encryption code public, including the
protocol specifications. This will allow others to examine the code for
vulnerabilities. It's true we won't know for sure if the code we're
seeing is the code that's actually used in the application, but
surreptitious substitution is hard to do, forces the company to outright
lie, and increases the number of people required for the conspiracy to work.

* The community should create independent compatible versions of
encryption systems, to verify they are operating properly. I envision
companies paying for these independent versions, and universities
accepting this sort of work as good practice for their students. And
yes, I know this can be very hard in practice.

* There should be no master secrets. These are just too vulnerable.

* All random number generators should conform to published and accepted
standards. Breaking the random number generator is the easiest
difficult-to-detect method of subverting an encryption system. A
corollary: we need better published and accepted RNG standards.

* Encryption protocols should be designed so as not to leak any random
information. Nonces should be considered part of the key, or be public,
predictable counters if possible. Again, the goal is to make it harder
to subtly leak key bits in this information.
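One way to follow the nonce advice above is a deterministic message counter, which AEAD modes such as GCM permit. Every bit of the nonce is then predictable from the message number alone, so a subverted implementation has no covert channel to leak key material through it: any deviation is detectable. A sketch (the 12-byte layout is an assumption for illustration, not a mandated format):

```python
import struct

def counter_nonce(message_number: int) -> bytes:
    # A 12-byte nonce: 4 zero pad bytes plus a big-endian 64-bit counter.
    # Fully determined by the message number, hence nowhere to hide
    # leaked key bits.
    return struct.pack(">4xQ", message_number)

assert counter_nonce(1) == bytes(11) + b"\x01"
assert counter_nonce(1) != counter_nonce(2)  # still unique per message
```

The trade-off is that the sender must never reuse a counter value under the same key; that bookkeeping replaces trust in an opaque random source.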

This is a hard problem. We don't have any technical controls that
protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern
apps chatter endlessly on the Internet, providing noise and cover for
covert communications. Feature bloat provides a greater "attack surface"
for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a
piece of software does what it's supposed to do and nothing more.
Unfortunately, we're terrible at this. Even worse, there's not a lot of
practical research in this area -- and it's hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert
authors and deliberately weaken cryptography. But this isn't just about
the NSA, and legal controls won't protect against those who don't follow
the law and ignore international agreements. We need to make their job
harder by increasing their risk of discovery. Against a risk-averse
adversary, it might be good enough.

This essay previously appeared on


I am looking for other examples of known or plausible instances of
intentional vulnerabilities for a paper I am writing on this topic.  If
you can think of an example, please post a description and reference in
the comments below.  Please explain why you think the vulnerability
could be intentional.  Thank you.

« Reply #862 on: November 17, 2013, 12:57:46 AM »

fourth post

      Why the Government Should Help Leakers

In the Information Age, it's easier than ever to steal and publish data.
Corporations and governments have to adjust to their secrets being
exposed, regularly.

When massive amounts of government documents are leaked, journalists
sift through them to determine which pieces of information are
newsworthy, and confer with government agencies over what needs to be
redacted.

Managing this reality is going to require that governments actively
engage with members of the press who receive leaked secrets, helping
them secure those secrets -- even while being unable to prevent them
from publishing. It might seem abhorrent to help those who are seeking
to bring your secrets to light, but it's the best way to ensure that the
things that truly need to be secret remain secret, even as everything
else becomes public.

The WikiLeaks cables serve as an excellent example of how a government
should not deal with massive leaks of classified information.

WikiLeaks has said it asked US authorities for help in determining what
should be redacted before publication of documents, although some
government officials have challenged that statement. WikiLeaks' media
partners did redact many documents, but eventually all 250,000
unredacted cables were released  to the world as a result of a mistake.

The damage was nowhere near as serious as government officials initially
claimed, but it had been avoidable.

Fast-forward to today, and we have an even bigger trove of classified
documents. What Edward Snowden took -- "exfiltrated" is the National
Security Agency term -- dwarfs the State Department cables, and contains
considerably more important secrets. But again, the US government is
doing nothing to prevent a massive data dump.

The government engages with the press on individual stories. The
"Guardian," the "Washington Post," and the "New York Times" are all
redacting the original Snowden documents based on discussions with the
government. This isn't new. The US press regularly consults with the
government before publishing something that might be damaging. In 2006,
the "New York Times" consulted with both the NSA and the Bush
administration before publishing Mark Klein's whistleblowing about the
NSA's eavesdropping on AT&T trunk circuits. In all these cases, the goal
is to minimize actual harm to US security while ensuring the press can
still report stories in the public interest, even if the government
doesn't want it to.

In today's world of reduced secrecy, whistleblowing as civil
disobedience, and massive document exfiltrations, negotiations over
individual stories aren't enough. The government needs to develop a
protocol to actively help news organizations expose their secrets safely
and responsibly.

Here's what should have happened as soon as Snowden's whistleblowing
became public. The government should have told the reporters and
publications with the classified documents something like this: "OK, you
have them. We know that we can't undo the leak. But please let us help.
Let us help you secure the documents as you write your stories, and
securely dispose of the documents when you're done."

The people who have access to the Snowden documents say they don't want
them to be made public in their raw form or to get in the hands of rival
governments. But accidents happen, and reporters are not trained in
military secrecy practices.

Copies of some of the Snowden documents are being circulated to
journalists and others. With each copy, each person, each day, there's a
greater chance that, once again, someone will make a mistake and some --
or all -- of the raw documents will appear on the Internet. A formal
system of working with whistleblowers could prevent that.

I'm sure the suggestion sounds odious to a government that is actively
engaging in a war on whistleblowers, and that views Snowden as a
criminal and the reporters writing these stories as "helping the
terrorists." But it makes sense. Harvard law professor Jonathan Zittrain
compares this to plea bargaining.

The police regularly negotiate lenient sentences or probation for
confessed criminals in order to convict more important criminals. They
make deals with all sorts of unsavory people, giving them benefits they
don't deserve, because the result is a greater good.

In the Snowden case, an agreement would safeguard the most important of
NSA's secrets from other nations' intelligence agencies. It would help
ensure that the truly secret information not be exposed. It would
protect US interests.

Why would reporters agree to this? Two reasons. One, they actually do
want these documents secured while they look for stories to publish. And
two, it would be a public demonstration of that desire.

Why wouldn't the government just collect all the documents under the
pretense of securing them and then delete them? For the same reason they
don't renege on plea bargains: No one would trust them next time. And,
of course, because smart reporters will probably keep encrypted backups
under their own control.

We're nowhere near the point where this system could be put into
practice, but it's worth thinking about how it could work. The
government would need to establish a semi-independent group, called,
say, a Leak Management unit, which could act as an intermediary. Since
it would be isolated from the agencies that were the source of the leak,
its officials would be less vested and -- this is important -- less
angry over the leak. Over time, it would build a reputation, develop
protocols that reporters could rely on. Leaks will be more common in the
future, but they'll still be rare. Expecting each agency to develop
expertise in this process is unrealistic.

If there were sufficient trust between the press and the government,
this could work. And everyone would benefit.

This essay previously appeared on


** *** ***** ******* *********** *************

      NSA/Snowden News

Jack Goldsmith argues that we need the NSA to surveil the Internet not
for terrorism reasons, but for cyberespionage and cybercrime reasons.

Daniel Gallington argues -- the headline has nothing to do with the
content -- that the balance between surveillance and privacy is about
right.

Good summary from the "London Review of Books" on what the NSA can and
cannot do.

"A Template for Reporting Government Surveillance News Stories."  This
is from 2006, but it's even more true today.
We've changed administrations -- we've changed political parties -- but
nothing has changed.

There's a story that Edward Snowden successfully socially engineered
other NSA employees into giving him their passwords.

This talk by Dan Geer explains the NSA mindset of "collect everything."
The whole essay is well worth reading.

This "New York Times" story on the NSA is very good, and contains lots
of little tidbits of new information gleaned from the Snowden documents.
  "The agency's Dishfire database -- nothing happens without a code word
at the N.S.A. -- stores years of text messages from around the world,
just in case. Its Tracfin collection accumulates gigabytes of credit
card purchases. The fellow pretending to send a text message at an
Internet cafe in Jordan may be using an N.S.A. technique code-named
Polarbreeze to tap into nearby computers. The Russian businessman who is
socially active on the web might just become food for Snacks, the
acronym-mad agency's Social Network Analysis Collaboration Knowledge
Services, which figures out the personnel hierarchies of organizations
from texts."
This "Guardian" story is related.  It looks like both the "New York
Times" and the "Guardian" wrote separate stories about the same source
material.

"New York Times" reporter Scott Shane gave a 20-minute interview on
"Democracy Now" on the NSA and his reporting.

"Der Spiegel" is reporting that the GCHQ used QUANTUMINSERT to direct
users to fake LinkedIn and Slashdot pages run by -- this code name is
not in the article -- FOXACID servers.  There's not a lot technically
new in the article, but we do get some information about popularity and
use.

Slashdot has reacted to the story.

I wrote about QUANTUMINSERT, and the whole infection process, here.

** *** ***** ******* *********** *************

      The Trajectories of Government and Corporate Surveillance

Historically, surveillance was difficult and expensive.

Over the decades, as technology advanced, surveillance became easier and
easier. Today, we find ourselves in a world of ubiquitous surveillance,
where everything is collected, saved, searched, correlated and analyzed.

But while technology allowed for an increase in both corporate and
government surveillance, the private and public sectors took very
different paths to get there. The former always collected information
about everyone, but over time, collected more and more of it, while the
latter always collected maximal information, but over time, collected it
on more and more people.

Corporate surveillance has been on a path from minimal to maximal
information. Corporations always collected information on everyone they
could, but in the past they didn't collect very much of it and only held
it as long as necessary. When surveillance information was expensive to
collect and store, companies made do with as little as possible.

Telephone companies collected long-distance calling information because
they needed it for billing purposes. Credit cards collected only the
information about their customers' transactions that they needed for
billing. Stores hardly ever collected information about their customers,
maybe some personal preferences, or name-and-address for advertising
purposes. Even Google, back in the beginning, collected far less
information about its users than it does today.

As technology improved, corporations were able to collect more. As the
cost of data storage became cheaper, they were able to save more data
and for a longer time. And as big data analysis tools became more
powerful, it became profitable to save more. Today, almost everything is
being saved by someone -- probably forever.

Examples are everywhere. Internet companies like Google, Facebook,
Amazon and Apple collect everything we do online at their sites.
Third-party cookies allow those companies, and others, to collect data
on us wherever we are on the Internet. Store affinity cards allow
merchants to track our purchases. CCTV and aerial surveillance combined
with automatic face recognition allow companies to track our movements;
so does your cell phone. The Internet will facilitate even more
surveillance, by more corporations for more purposes.

On the government side, surveillance has been on a path from
individually targeted to broadly collected. When surveillance was manual
and expensive, it could only be justified in extreme cases. The warrant
process limited police surveillance, and resource restraints and the
risk of discovery limited national intelligence surveillance. Specific
individuals were targeted for surveillance, and maximal information was
collected on them alone.

As technology improved, the government was able to implement
ever-broadening surveillance. The National Security Agency could surveil
groups -- the Soviet government, the Chinese diplomatic corps, etc. --
not just individuals. Eventually, they could spy on entire
communications trunks.

Now, instead of watching one person, the NSA can monitor "three hops"
away from that person -- an ever-widening network of people not directly
connected to the person under surveillance. Using sophisticated tools,
the NSA can surveil broad swaths of the Internet and phone network.
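The "three hops" rule described above is just breadth-first search on a contact graph, which makes its reach easy to quantify. A minimal sketch (the call graph below is hypothetical):

```python
from collections import deque

def within_hops(graph: dict, target: str, max_hops: int = 3) -> set:
    # Breadth-first search: everyone reachable from `target` in at most
    # `max_hops` contact links is swept into the surveillance net.
    seen, frontier = {target}, deque([(target, 0)])
    while frontier:
        person, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for contact in graph.get(person, ()):
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, depth + 1))
    return seen - {target}

# Hypothetical call records: a calls b, b calls c, c calls d, d calls e.
calls = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["e"]}
assert within_hops(calls, "a") == {"b", "c", "d"}  # e is 4 hops away
```

With realistic contact lists of a few hundred entries each, three hops from a single target can plausibly cover millions of people.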

Governments have always used their authority to piggyback on corporate
surveillance. Why should they go through the trouble of developing their
own surveillance programs when they could just ask corporations for the
data? For example, we just learned that the NSA collects e-mail, IM and
social networking contact lists for millions of Internet users worldwide.

But as corporations started collecting more information on populations,
governments started demanding that data. Through National Security
Letters, the FBI can surveil huge groups of people without obtaining a
warrant. Through secret agreements, the NSA can monitor the entire
Internet and telephone networks.

This is a huge part of the public-private surveillance partnership.

The result of all this is we're now living in a world where both
corporations and governments have us all under pretty much constant
surveillance.

Data is a byproduct of the information society. Every interaction we
have with a computer creates a transaction record, and we interact with
computers hundreds of times a day. Even if we don't use a computer --
buying something in person with cash, say -- the merchant uses a
computer, and the data flows into the same system. Everything we do
leaves a data shadow, and that shadow is constantly under surveillance.

Data is also a byproduct of information society socialization, whether
it be e-mail, instant messages or conversations on Facebook.
Conversations that used to be ephemeral are now recorded, and we are all
leaving digital footprints wherever we go.

Moore's law has made computing cheaper. All of us have made computing
ubiquitous. And because computing produces data, and that data equals
surveillance, we have created a world of ubiquitous surveillance.

Now we need to figure out what to do about it. This is more than reining
in the NSA or fining a corporation for the occasional data abuse. We
need to decide whether our data is a shared societal resource, a part of
us that is inherently ours by right, or a private good to be bought and
sold.

Writing in the "Guardian," Chris Huhne said that "information is power,
and the necessary corollary is that privacy is freedom." How this
interplay between power and freedom plays out in the information age is
still to be determined.

This essay previously appeared on


** *** ***** ******* *********** *************

      A Fraying of the Public/Private Surveillance Partnership

The public/private surveillance partnership between the NSA and
corporate data collectors is starting to fray. The reason is sunlight.
The publicity resulting from the Snowden documents has made companies
think twice before allowing the NSA access to their users' and
customers' data.

Pre-Snowden, there was no downside to cooperating with the NSA. If the
NSA asked you for copies of all your Internet traffic, or to put
backdoors into your security software, you could assume that your
cooperation would forever remain secret. To be fair, not every
corporation cooperated willingly. Some fought in court. But it seems
that a lot of them, telcos and backbone providers especially, were happy
to give the NSA unfettered access to everything. Post-Snowden, this is
changing. Now that many companies' cooperation has become public,
they're facing a PR backlash from customers and users who are upset that
their data is flowing to the NSA. And this is costing those companies
business.

How much is unclear. In July, right after the PRISM revelations, the
Cloud Security Alliance reported that US cloud companies could lose $35
billion over the next three years, mostly due to losses of foreign
sales. Surely that number has increased as outrage over NSA spying
continues to build in Europe and elsewhere. There is no similar report
for software sales, although I have attended private meetings where
several large US software companies complained about the loss of foreign
sales. On the hardware side, IBM is losing business in China. The US
telecom companies are also suffering: AT&T is losing business worldwide.

This is the new reality. The rules of secrecy are different, and
companies have to assume that their responses to NSA data demands will
become public. This means there is now a significant cost to
cooperating, and a corresponding benefit to fighting.

Over the past few months, more companies have woken up to the fact that
the NSA is basically treating them as adversaries, and are responding as
such. In mid-October, it became public that the NSA was collecting
e-mail address books and buddy lists from Internet users logging into
different service providers. Yahoo, which didn't encrypt those user
connections by default, allowed the NSA to collect much more of its data
than Google, which did. That same day, Yahoo announced that it would
implement SSL encryption by default for all of its users. Two weeks
later, when it became public that the NSA was collecting data on Google
users by eavesdropping on the company's trunk connections between its
data centers, Google announced that it would encrypt those connections.

We recently learned that Yahoo fought a government order to turn over
data. Lavabit fought its order as well. Apple is now tweaking the
government. And we think better of those companies because of it.

Now Lavabit, which closed down its e-mail service rather than comply
with the NSA's request for the master keys that would compromise all of
its customers, has teamed with Silent Circle to develop a secure e-mail
standard that is resistant to these kinds of tactics.

The Snowden documents made it clear how much the NSA relies on
corporations to eavesdrop on the Internet. The NSA didn't build a
massive Internet eavesdropping system from scratch. It noticed that the
corporate world was already eavesdropping on every Internet user --
surveillance is the business model of the Internet, after all -- and
simply got copies for itself.

Now, that secret ecosystem is breaking down.  Supreme Court Justice
Louis Brandeis wrote about transparency, saying "Sunlight is said to be
the best of disinfectants." In this case, it seems to be working.

These developments will only help security. Remember that while Edward
Snowden has given us a window into the NSA's activities, these sorts of
tactics are probably also used by other intelligence services around the
world. And today's secret NSA programs become tomorrow's PhD theses, and
the next day's criminal hacker tools. It's impossible to build an
Internet where the good guys can eavesdrop, and the bad guys cannot. We
have a choice between an Internet that is vulnerable to all attackers,
or an Internet that is safe from all attackers. And a safe and secure
Internet is in everyone's best interests, including the US's.

This essay previously appeared on


** *** ***** ******* *********** *************

      Book Review: "Cyber War Will Not Take Place"

Cyber war is possibly the most dangerous buzzword of the Internet era.
The fear-inducing rhetoric surrounding it is being used to justify major
changes in the way the Internet is organized, governed, and constructed.
And in "Cyber War Will Not Take Place," Thomas Rid convincingly argues
that cyber war is not a compelling threat. Rid is one of the leading
cyber war skeptics in Europe, and although he doesn't argue that war
won't extend into cyberspace, he says that cyberspace's role in war is
more limited than doomsayers want us to believe. His argument against
cyber war is lucid and methodical. He divides "offensive and violent
political acts" in cyberspace into: sabotage, espionage, and subversion.
These categories are larger than cyberspace, of course, but Rid spends
considerable time analyzing their strengths and limitations within
cyberspace. The details are complicated, but his end conclusion is that
many of these types of attacks cannot be defined as acts of war, and any
future war won't involve many of these types of attacks.

None of this is meant to imply that cyberspace is safe. Threats of all
sorts fill cyberspace, but not threats of war. As such, the policies to
defend against them are different. While hackers and criminal threats
get all the headlines, more worrisome are the threats from governments
seeking to consolidate their power. I have long argued that controlling
the Internet has become critical for totalitarian states, and their four
broad tools of surveillance, censorship, propaganda and use control have
legitimate commercial applications, and are also employed by democracies.

A lot of the problem here is of definition. There isn't broad agreement
as to what constitutes cyber war, and this confusion plays into the
hands of those hyping its threat. If everything from Chinese espionage
to Russian criminal extortion to activist disruption falls under the
cyber war umbrella, then it only makes sense to put more of the Internet
under government -- and thus military -- control. Rid's book is a
compelling counter-argument to this approach.

Rid's final chapter is an essay unto itself, and lays out his vision as
to how we should deal with threats in cyberspace. For policymakers who
won't sit through an entire book, this is the chapter I would urge them
to read. Arms races are dangerous and destabilizing, and we're in the
early years of a cyberwar arms race that's being fueled by fear and
ignorance. This book is a cogent counterpoint to the doomsayers and the
profiteers, and should be required reading for anyone concerned about
security in cyberspace.

This book review previously appeared in Europe's World.

Thomas Rid, "Cyber War Will Not Take Place," Oxford University Press, 2013.

** *** ***** ******* *********** *************

      Understanding the Threats in Cyberspace

The primary difficulty of cyber security isn't technology -- it's
policy.  The Internet mirrors real-world society, which makes security
policy online as complicated as it is in the real world. Protecting
critical infrastructure against cyber-attack is just one of cyberspace's
many security challenges, so it's important to understand them all
before any one of them can be solved.

The list of bad actors in cyberspace is long, and spans a wide range of
motives and capabilities. At the extreme end there's cyberwar:
destructive actions by governments during a war. When government
policymakers like David Omand think of cyber-attacks, that's what comes
to mind. Cyberwar is conducted by capable and well-funded groups and
involves military operations against both military and civilian targets.
Along much the same lines are non-nation state actors who conduct
terrorist operations. Although less capable and well-funded, they are
often talked about in the same breath as true cyberwar.

Much more common are the domestic and international criminals who run
the gamut from lone individuals to organized crime. They can be very
capable and well-funded and will continue to inflict significant
economic damage.

Threats from peacetime governments have been seen increasingly in the
news. The US worries about Chinese espionage against Western targets,
and we're also seeing US surveillance of pretty much everyone in the
world, including Americans inside the US. The National Security Agency
(NSA) is probably the most capable and well-funded espionage
organization in the world, and we're still learning about the full
extent of its sometimes illegal operations.

Hacktivists are a different threat. Their actions range from
Internet-age acts of civil disobedience to the inflicting of actual
damage. This is hard to generalize about because the individuals and
groups in this category vary so much in skill, funding and motivation.
Hackers falling under the "Anonymous" aegis -- it really isn't correct
to call them a group -- come under this category, as does WikiLeaks.
Most of these attackers are outside the organization, although
whistleblowing -- the civil disobedience of the information age --
generally involves insiders like Edward Snowden.

This list of potential network attackers isn't exhaustive. Depending on
who you are and what your organization does, you might also be concerned
with espionage cyber-attacks by the media, rival corporations or even
the corporations we entrust with our data.

The issue here, and why it affects policy, is that protecting against
these various threats can lead to contradictory requirements. In the US,
the NSA's post-9/11 mission to protect the country from terrorists has
transformed it into a domestic surveillance organization. The NSA's need
to protect its own information systems from outside attack opened it up
to attacks from within. Do the corporate security products we buy to
protect ourselves against cybercrime contain backdoors that allow for
government spying? European countries may condemn the US for spying on
its own citizens, but do they do the same thing?

All these questions are especially difficult because military and
security organizations along with corporations tend to hype particular
threats. For example, cyberwar and cyberterrorism are greatly overblown
as threats -- because they result in massive government programs with
huge budgets and power -- while cybercrime is largely downplayed.

We need greater transparency, oversight and accountability on both the
government and corporate sides before we can move forward. With the
secrecy that surrounds cyber-attack and cyberdefense, it's hard to be optimistic.

This essay previously appeared in "Europe's World."

** *** ***** ******* *********** *************


Ed Felten makes a strong argument that a court order is exactly the
same thing as an insider attack:
This is why designing Lavabit to be resistant to court order would have
been the right thing to do, and why we should all demand systems that
are designed in this way.

There seems to be a bunch of research into uniquely identifying cell
phones through unique analog characteristics of the various embedded
sensors.  These sorts of things could replace cookies as surveillance tools.

Several versions of D-Link router firmware contain a backdoor.  Just set
the browser's user agent string to "xmlset_roodkcableoj28840ybtide," and
you're in.  (Hint, remove the number and read it backwards.)  It was
probably put there for debugging purposes, but has all sorts of
applications for surveillance.
There are open-source programs available to replace the firmware:
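As a concrete illustration, here is a minimal Python sketch of the backdoor mechanics; the router address is a placeholder, and the request is constructed but deliberately never sent:

```python
import urllib.request

# The magic string from the affected D-Link firmware.
MAGIC_UA = "xmlset_roodkcableoj28840ybtide"

def decode_hint(ua):
    """Apply the hint from the text: drop the digits, then read backwards."""
    return "".join(ch for ch in ua if not ch.isdigit())[::-1]

def backdoor_request(url):
    """Build (but do not send) a request the vulnerable firmware would
    accept without authentication; the router URL is a placeholder."""
    return urllib.request.Request(url, headers={"User-Agent": MAGIC_UA})

print(decode_hint(MAGIC_UA))  # editbyjoelbackdoor_teslmx
req = backdoor_request("http://192.168.0.1/")
print(req.get_header("User-agent"))
```

The decoded string reads "edit by joel backdoor" followed by "xmlset" reversed, which is why the hint works.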

The new iPhone has a motion sensor chip, and that opens up new
opportunities for surveillance.

Slashdot asks whether I can be trusted:

DARPA is looking for a fully automated network defense system, and has a contest to develop one:

Cognitive biases about violence as a negotiating tactic: interesting paper.

This article talks about applications of close-in surveillance using
your phone's Wi-Fi in retail, but the possibilities are endless.
Basically, the system is using the MAC address to identify individual
devices.  Another article on the system is here.
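To make the tracking mechanism concrete, here is a minimal Python sketch; the salting scheme and function names are invented for illustration, not taken from any actual retail product:

```python
import hashlib

def device_id(mac, salt="store-42"):
    """Derive a stable pseudonymous identifier from a Wi-Fi MAC address,
    as a retail analytics system might.  'store-42' is an invented salt."""
    normalized = mac.lower().replace("-", ":")
    return hashlib.sha256((salt + normalized).encode()).hexdigest()[:16]

# Two sightings of the same phone (different formatting) plus one other device.
sightings = ["AA:BB:CC:11:22:33", "aa-bb-cc-11-22-33", "DE:AD:BE:EF:00:01"]

seen = {}
for mac in sightings:
    seen[device_id(mac)] = seen.get(device_id(mac), 0) + 1

print(len(seen))  # 2 distinct devices despite three sightings
```

The point is that the MAC address is globally unique and broadcast in the clear whenever Wi-Fi is on, so even a hashed version is a persistent tracking identifier.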

Good story of badBIOS, a really nasty piece of malware.  The weirdest
part is how it uses ultrasonic sound to jump air gaps.
I'm not sure what to make of this.  When I first read it, I thought it
was a hoax.  But enough others are taking it seriously that I think it's
a real story.  I don't know whether the facts are real, and I haven't
seen anything about what this malware actually does.
A debunking:

This story of the bomb squad at the Boston marathon is interesting reading,
but I'm left wanting more.  What are the lessons here?  How can we do
this better next time?  Clearly we won't be able to anticipate bombings;
even Israel can't do that.  We have to get better at responding.

Here's a demonstration of the US government's capabilities to monitor
the public Internet.  Former CIA and NSA Director Michael Hayden was on
the Acela train between New York and Washington DC, taking press
interviews on the phone.  Someone nearby overheard the conversation, and
started tweeting about it.  Within 15 or so minutes, someone somewhere
noticed the tweets, and informed someone who knew Hayden.  That person
called Hayden on his cell phone and, presumably, told him to shut up.
Nothing covert here; the tweets were public.
I don't think this was a result of the NSA monitoring the Internet.  I
think this was some public relations office -- probably the one that is
helping General Alexander respond to all the Snowden stories -- who is
searching the public Twitter feed for, among other things, Hayden's
name.  Even so: wow.

This elliptic-curve crypto primer is well-written and very good.

The wings of the *Goniurellia tridens* fruit fly have images of an ant
on them, to deceive predators:  "When threatened, the fly flashes its
wings to give the appearance of ants walking back and forth. The
predator gets confused and the fly zips off."

Interesting article on risk-based authentication.  I like the idea of
giving each individual login attempt a risk score, based on the
characteristics of the attempt.
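A toy version of the idea, with entirely made-up features, weights, and thresholds, might look like this in Python:

```python
def risk_score(attempt):
    """Score a login attempt from 0.0 (routine) to 1.0 (very suspicious).
    The features and weights below are invented for illustration."""
    score = 0.0
    if attempt.get("new_device"):
        score += 0.4
    if attempt.get("new_country"):
        score += 0.3
    if attempt.get("recent_failures", 0) >= 3:
        score += 0.2
    if attempt.get("odd_hour"):
        score += 0.1
    return min(score, 1.0)

def action(score):
    """Map a risk score to a response: allow, step-up challenge, or deny."""
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "challenge"  # e.g. require a second factor
    return "deny"

print(action(risk_score({})))                    # allow
print(action(risk_score({"new_device": True})))  # challenge
```

Real systems would train these weights from fraud data rather than hard-coding them, but the shape of the decision is the same.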

This bizarre essay argues that online gambling is a strategic national
threat because terrorists could use it to launder money.
I'm impressed with the massive fear resonating.

Adobe lost 150 million customer passwords.  Even worse, it had a pretty
dumb cryptographic hash system protecting those passwords.
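Reports on the breach described passwords stored reversibly and without per-user salts; for contrast, a minimal sketch of the conventional approach, using only the Python standard library:

```python
import hashlib, os

def hash_password(password, salt=None):
    """Hash a password with a unique random salt and a deliberately
    slow key-derivation function (PBKDF2-HMAC-SHA256, 200,000 rounds)."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, digest):
    """Re-derive from the candidate password and compare; the original
    password is never stored and cannot be decrypted out of the record."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000) == digest

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("123456", salt, digest))                        # False
```

Per-user salts also mean that two accounts with the same password produce different records, so one crack doesn't expose every user who chose it.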

Microsoft has announced plans to retire SHA-1 by 2016. I think this is a
good move.
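The issue with SHA-1 is eroding collision resistance rather than any secrecy failure; computing the old and replacement digests side by side in Python makes the migration concrete:

```python
import hashlib

msg = b"contents of a certificate to be signed"

# SHA-1: 160-bit digest.  Collision attacks have eaten into its design
# margin, which is why certificate programs are phasing it out.
print(hashlib.sha1(msg).hexdigest())

# SHA-256: 256-bit digest, the commonly recommended replacement.
print(hashlib.sha256(msg).hexdigest())
```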

** *** ***** ******* *********** *************


SecureDrop is an open-source whistleblower support system, originally
written by Aaron Swartz and now run by the Freedom of the Press
Foundation.  The first instance of this system was named StrongBox and
is being run by "The New Yorker."  To further add to the naming
confusion, Aaron Swartz called the system DeadDrop when he wrote the code.

I participated in a detailed security audit of the StrongBox
implementation, along with some great researchers from the University of
Washington and Jake Appelbaum.  The problems we found were largely
procedural, and things that the Freedom of the Press Foundation are
working to fix.

Freedom of the Press Foundation is not running any instances of
SecureDrop.  It has about a half dozen major news organizations lined up,
and will be helping them install their own starting the first week of
November.  So hopefully any would-be whistleblowers will soon have their
choice of news organizations to securely communicate with.

Strong technical whistleblower protection is essential, especially given
President Obama's war on whistleblowers. I hope this system is broadly
implemented and extensively used.




Our security audit:

Obama's war on whistleblowers:

The US government sets up secure indoor tents for the president and
other officials to deal with classified material while traveling abroad.

** *** ***** ******* *********** *************

      Dry Ice Bombs at LAX

The news story about the guy who left dry ice bombs in restricted areas
of LAX is really weird.

I can't get worked up over it, though.  Dry ice bombs are a harmless
prank.  I set off a bunch of them when I was in college, although I used
liquid nitrogen, because I was impatient.  I
know of someone who set a few off over the summer, just for fun.  They
do make a very satisfying boom.

Having them set off in a secure airport area doesn't illustrate any new
vulnerabilities.  We already know that trusted people can subvert
security systems.  So what?

I've done a bunch of press interviews on this.  One radio announcer
really didn't like my nonchalance.  He really wanted me to complain
about the lack of cameras at LAX, and was unhappy when I pointed out
that we didn't need cameras to catch this guy.

I like my kicker quote in this article:

     Various people, including former Los Angeles Police Chief
     William Bratton, have called LAX the No. 1 terrorist target on
     the West Coast. But while an Algerian man discovered with a
     bomb at the Canadian border in 1999 was sentenced to 37 years
     in prison in connection with a plot to cause damage at LAX,
     Schneier said that assessment by Bratton is probably not true.

     "Where can you possibly get that data?" he said. "I don't think
     terrorists respond to opinion polls about how juicy targets are."

** *** ***** ******* *********** *************

      Schneier News

In Spring semester, I'm running a reading group -- which seems to be a
formal variant of a study group -- at Harvard Law School on "Security,
Power, and the Internet."  I would like a good mix of people, so non-law
students and non-Harvard students are both welcome to sign up.

Various security articles about me (or with good quotes by me):

My talk at the IETF Vancouver meeting on NSA and surveillance:

Press articles about me and the IETF meeting:

Other video interviews:

** *** ***** ******* *********** *************

      The Battle for Power on the Internet

We're in the middle of an epic battle for power in cyberspace. On one
side are the traditional, organized, institutional powers such as
governments and large multinational corporations. On the other are the
distributed and nimble: grassroots movements, dissident groups, hackers,
and criminals. Initially, the Internet empowered the second side. It
gave them a place to coordinate and communicate efficiently, and made
them seem unbeatable. But now, the more traditional institutional powers
are winning, and winning big. How these two sides fare in the long term,
and the fate of the rest of us who don't fall into either group, is an
open question -- and one vitally important to the future of the Internet.

In the Internet's early days, there was a lot of talk about its "natural
laws" -- how it would upend traditional power blocks, empower the
masses, and spread freedom throughout the world. The international
nature of the Internet circumvented national laws. Anonymity was easy.
Censorship was impossible. Police were clueless about cybercrime. And
bigger changes seemed inevitable. Digital cash would undermine national
sovereignty. Citizen journalism would topple traditional media,
corporate PR, and political parties. Easy digital copying would destroy
the traditional movie and music industries. Web marketing would allow
even the smallest companies to compete against corporate giants. It
really would be a new world order.

This was a utopian vision, but some of it did come to pass. Internet
marketing has transformed commerce. The entertainment industries have
been transformed by things like MySpace and YouTube, and are now more
open to outsiders. Mass media has changed dramatically, and some of the
most influential people in the media have come from the blogging world.
There are new ways to organize politically and run elections.
Crowdfunding has made tens of thousands of projects possible to finance,
and crowdsourcing made more types of projects possible. Facebook and
Twitter really did help topple governments.

But that is just one side of the Internet's disruptive character. The
Internet has emboldened traditional power as well.

On the corporate side, power is consolidating, a result of two current
trends in computing. First, the rise of cloud computing means that we no
longer have control of our data. Our e-mail, photos, calendars, address
books, messages, and documents are on servers belonging to Google,
Apple, Microsoft, Facebook, and so on. And second, we are increasingly
accessing our data using devices that we have much less control over:
iPhones, iPads, Android phones, Kindles, ChromeBooks, and so on. Unlike
traditional operating systems, those devices are controlled much more
tightly by the vendors, who limit what software can run, what they can
do, how they're updated, and so on. Even Windows 8 and Apple's Mountain
Lion operating system are heading in the direction of more vendor control.

I have previously characterized this model of computing as "feudal."
Users pledge their allegiance to more powerful companies who, in turn,
promise to protect them from both sysadmin duties and security threats.
It's a metaphor that's rich in history and in fiction, and a model
that's increasingly permeating computing today.

Medieval feudalism was a hierarchical political system, with obligations
in both directions. Lords offered protection, and vassals offered
service. The lord-peasant relationship was similar, with a much greater
power differential. It was a response to a dangerous world.

Feudal security consolidates power in the hands of the few. Internet
companies, like lords before them, act in their own self-interest. They
use their relationship with us to increase their profits, sometimes at
our expense. They act arbitrarily. They make mistakes. They're
deliberately -- and incidentally -- changing social norms. Medieval
feudalism gave the lords vast powers over the landless peasants; we're
seeing the same thing on the Internet.

It's not all bad, of course. We, especially those of us who are not
technical, like the convenience, redundancy, portability, automation,
and shareability of vendor-managed devices. We like cloud backup. We
like automatic updates. We like not having to deal with security
ourselves. We like that Facebook just works -- from any device, anywhere.

Government power is also increasing on the Internet. There is more
government surveillance than ever before. There is more government
censorship than ever before. There is more government propaganda, and an
increasing number of governments are controlling what their users can
and cannot do on the Internet. Totalitarian governments are embracing a
growing "cyber sovereignty" movement to further consolidate their power.
And the cyberwar arms race is on, pumping an enormous amount of money
into cyber-weapons and consolidated cyber-defenses, further increasing
government power.

In many cases, the interests of corporate and government powers are
aligning. Both corporations and governments benefit from ubiquitous
surveillance, and the NSA is using Google, Facebook, Verizon, and others
to get access to data it couldn't otherwise. The entertainment industry
is looking to governments to enforce its antiquated business models.
Commercial security equipment from companies like BlueCoat and Sophos is
being used by oppressive governments to surveil and censor their
citizens. The same facial recognition technology that Disney uses in its
theme parks can also identify protesters in China and Occupy Wall Street
activists in New York. Think of it as a public/private surveillance partnership.

What happened? How, in those early Internet years, did we get the future
so wrong?

The truth is that technology magnifies power in general, but rates of
adoption are different. The unorganized, the distributed, the marginal,
the dissidents, the powerless, the criminal: they can make use of new
technologies very quickly. And when those groups discovered the
Internet, suddenly they had power. But later, when the already-powerful
big institutions finally figured out how to harness the Internet, they
had more power to magnify. That's the difference: the distributed were
more nimble and were faster to make use of their new power, while the
institutional were slower but were able to use their power more effectively.

So while the Syrian dissidents used Facebook to organize, the Syrian
government used Facebook to identify dissidents to arrest.

All isn't lost for distributed power, though. For institutional power,
the Internet is a change in degree, but for distributed power, it's a
qualitative one. The Internet gives decentralized groups -- for the
first time -- the ability to coordinate. This can have incredible
ramifications, as we saw in the SOPA/PIPA debate, Gezi, Brazil, and the
rising use of crowdfunding. It can invert power dynamics, even in the
presence of surveillance, censorship, and use control. But aside from
political coordination, the Internet allows for social coordination as
well -- to unite, for example, ethnic diasporas, gender minorities,
sufferers of rare diseases, and people with obscure interests.

This isn't static: Technological advances continue to provide advantage
to the nimble. I discussed this trend in my book "Liars and Outliers."
If you think of security as an arms race between attackers and
defenders, any technological advance gives one side or the other a
temporary advantage. But most of the time, a new technology benefits the
nimble first. They are not hindered by bureaucracy -- and sometimes not
by laws or ethics, either. They can evolve faster.

We saw it with the Internet. As soon as the Internet started being used
for commerce, a new breed of cybercriminal emerged, immediately able to
take advantage of the new technology. It took police a decade to catch
up. And we saw it on social media, as political dissidents made use of
its organizational powers before totalitarian regimes did.

This delay is what I call a "security gap." It's greater when there's
more technology, and in times of rapid technological change. Basically,
if there are more innovations to exploit, there will be more damage
resulting from society's inability to keep up with exploiters of all of
them. And since our world is one in which there's more technology than
ever before, and a faster rate of technological change than ever before,
we should expect to see a greater security gap than ever before. In
other words, there will be an increasing time period during which nimble
distributed powers can make use of new technologies before slow
institutional powers can make better use of those technologies.

This is the battle: quick vs. strong. To return to medieval metaphors,
you can think of a nimble distributed power -- whether marginal,
dissident, or criminal -- as Robin Hood; and ponderous institutional
powers -- both government and corporate -- as the feudal lords.

So who wins? Which type of power dominates in the coming decades?

Right now, it looks like traditional power. Ubiquitous surveillance
means that it's easier for the government to identify dissidents than it
is for the dissidents to remain anonymous. Data monitoring means it's
easier for the Great Firewall of China to block data than it is for people to
circumvent it. The way we all use the Internet makes it much easier for
the NSA to spy on everyone than it is for anyone to maintain privacy.
And even though it is easy to circumvent digital copy protection, most
users still can't do it.

The problem is that leveraging Internet power requires technical
expertise. Those with sufficient ability will be able to stay ahead of
institutional powers. Whether it's setting up your own e-mail server,
effectively using encryption and anonymity tools, or breaking copy
protection, there will always be technologies that can evade
institutional powers. This is why cybercrime is still pervasive, even as
police savvy increases; why technically capable whistleblowers can do so
much damage; and why organizations like Anonymous are still a viable
social and political force. Assuming technology continues to advance --
and there's no reason to believe it won't -- there will always be a
security gap in which technically advanced Robin Hoods can operate.

Most people, though, are stuck in the middle. These are people who don't
have the technical ability to evade large governments and corporations,
avoid the criminal and hacker groups who prey on us, or join any
resistance or dissident movements. These are the people who accept
default configuration options, arbitrary terms of service, NSA-installed
backdoors, and the occasional complete loss of their data. These are the
people who get increasingly isolated as government and corporate power
align. In the feudal world, these are the hapless peasants. And it's
even worse when the feudal lords -- or any powers -- fight each other.
As anyone watching "Game of Thrones" knows, peasants get trampled when
powers fight: when Facebook, Google, Apple, and Amazon fight it out in
the market; when the US, EU, China, and Russia fight it out in
geopolitics; or when it's the US vs. "the terrorists" or China vs. its dissidents.

The abuse will only get worse as technology continues to advance. In the
battle between institutional power and distributed power, more
technology means more damage. We've already seen this: Cybercriminals
can rob more people more quickly than criminals who have to physically
visit everyone they rob. Digital pirates can make more copies of more
things much more quickly than their analog forebears. And we'll see it
in the future: 3D printers mean that the computer restriction debate
will soon involve guns, not movies. Big data will mean that more
companies will be able to identify and track you more easily. It's the
same problem as the "weapons of mass destruction" fear: terrorists armed with more powerful technology can do more damage.

« Reply #863 on: November 27, 2013, 07:23:11 PM »

A bit of background context:

A DUI Checkpoint

« Reply #864 on: November 29, 2013, 11:43:35 AM »

Vladimir Lenin, the founder of the Soviet state and godfather of modern totalitarian politics, once explained the totalitarian worldview this way:

"We recognize nothing private."

The article goes on to discuss China, but is applicable IMO to Obamacare and big government intrusions here as well.

« Reply #865 on: November 29, 2013, 04:48:13 PM »

« Reply #866 on: December 02, 2013, 08:07:41 AM »

I guess it could be either electronic surveillance or possibly simply bribing insiders.   Maybe both.
Probably not new.  Just more obvious now.

*****Texans Allege ‘Fishy’ Adjustments By The Patriots

WILL GRUBB, Sports Radio 610

December 1, 2013 5:06 PM

Houston (CBS Houston) - The Texans defense struggled against Tom Brady and the Patriots. But then again, who doesn’t?

After the Texans' 34-31 loss, defensive end Antonio Smith made it clear he thought Brady had a little extra help in carving up their defense to the tune of 365 passing yards.

“Either teams are spying on us or scouting us,” Smith said.

The nine-year veteran says the Texans added a new defensive wrinkle this week but the Patriots ‘miraculously’ knew it was coming.

“It was just miraculous that they changed up some things that they did on offense that keyed on what we put in this week,” Smith said. “There’s no way. We have not did it ever (sic) before and they ain’t never changed it ever before so it was just kind of fishy.”

In 2007 the Patriots were involved in a scandal commonly known as ‘Spy Gate’ where Bill Belichick was fined $500,000 and the team had to forfeit a first-round draft pick for secretly taping coaches and walkthroughs.

“(Brady) knew what we were doing,” linebacker Joe Mays said.

“It is a specific thing that was important to what we were going to do today that they did all year,” Smith added.

The Patriots deciding to spy on a 2-9 team a week after their biggest win of the season seems like a stretch. But if the allegations are proven true, it would certainly be a major scandal for a Patriots team gearing up for a playoff run.

Get in contact with Will Grubb on Twitter – @WillGrubbRadio – or on Facebook – Will Grubb.*****

« Reply #867 on: December 02, 2013, 11:58:34 AM »


Smile, You're on Candid Webcam
A YouTube stunt shows how easy it is to collect personal information from social-media posts.
By L. Gordon Crovitz
Dec. 1, 2013 6:34 p.m. ET

More than one billion people now use social media around the world, but users are still figuring out how much privacy they are willing to trade for being able to share with their friends—and sometimes with strangers.

Comic Jack Vale, who has a channel on YouTube featuring hidden-camera spoofs, recently conducted what he called a "social media experiment prank." He went to a shopping district in Irvine, Calif., and searched social media services to see who was nearby and what he could learn about them.

"I wanted to see how easy it would be to get personal information from complete strangers," he explains in the video's introduction, "and while I'm at it, of course, freak 'em out a little bit. Keep in mind when you watch this video, I got all of this information just by searching their personal social media posts."

He gleaned most of his information from Facebook, Twitter and the photo-sharing site Instagram, plus geolocation via smartphones. He was able to call out to people on the street by name. He shocked a family by referring to their pet lizard, congratulated passersby on their recent birthdays, and told others of the meals they had just eaten. Among the reactions: "Wow, you're tripping me out right now," "That is really creepy," and "Ew!" One man was unamused: "Thanks for invading our privacy. I'll call the police if you do that again." (Watch the video at

Mr. Vale explains in a follow-up video how easy it was to find the personal information. The lesson: "Your information isn't as private as you might think it is to total strangers." The prank video has been viewed more than 2.5 million times.

Debates about privacy tend to be conducted in the abstract by regulators, lobbyists and theorists—not by actual users of social media. The Federal Trade Commission has negotiated 20-year consent decrees that give it broad authority over the privacy policies of companies including Facebook (which now owns Instagram) and Google. But regulators and social-media companies can only guess where users want to set their privacy trade-offs—something the users themselves are still deciding.

The good news is that any need for regulation is falling as people better understand the trade-offs they're making, and as new companies offer greater privacy to those who value it.

A schoolteacher recently wanted to remind her students to be careful what they post online. She posted a photo of herself on Facebook holding a sign that read, "I'm talking to my Fifth Grade students about Internet safety and how quickly a photo can be seen by lots of people. If you are reading this, please click 'Like.' Thanks!" She got more feedback than she expected. People edited her photo to replace her face with those of actors and a smiley face. Others Photoshopped the sign into an Alcatraz prisoner ID and the Declaration of Independence.

Microsoft is trying to use privacy as a way to compete with Google. It recently launched its Scroogled Store online, "dedicated to exposing Google's violations of your privacy." You can order coffee mugs, T-shirts and other gear with slogans making fun of its competitor. Microsoft has argued that Google goes too far by targeting ads based on the content of emails and sharing contact information with others. Among the Microsoft slogans are "Keep calm while we steal your data" and "I'm watching you." Skeptics will point out that Microsoft also uses personal information to deliver ads on Bing and only wishes it had the information Google gets from its Gmail, YouTube and Android.

The truism rings ever more true that if you're not paying for the product, then you are the product being sold. Facebook recently updated its privacy settings to make clear how focused it is on serving its paying customers: advertisers. Its "sponsored stories" program rebroadcasts favorable comments users post about products to their friends. Google recently launched its similar "shared endorsements" program that will place users' names, photos and favorable comments on ads running on any of the two million websites belonging to its ad network. New services such as Snapchat offer "ephemeral" communications, which disappear after the recipient views the video or other message, allowing more-private communications.

Many people regard being "sold" to advertisers as a fair trade for the otherwise free services they get from Facebook and Google. But Mr. Vale's video is a useful reminder of how much privacy they're giving up.

« Reply #868 on: December 04, 2013, 08:02:05 AM »

Especially the young and dumb.

This is why Sessions is right.  Anyone think Exxon was the corporation to fear?  What about the internet oligarchs?  Who the hell is going to protect us from abuse from them?

The government?  Why the government can't even protect their own websites?

Who is going to protect us from the abuse and evil that exists in all humanity?

I hear nothing from our representatives Pubs, Crats, or Partiers.  Nothing.  As a victim of information technology from organized crime and American Big entertainment business  I want answers. 

I am still hearing dead Freakin silence.  To may ex party Republicans - it ain't just the government we need to fear>

******Op-Ed Columnist

Mommy, the Drone’s Here!

Published: December 3, 2013

If you aren't nervous enough reading about 3-D printers spitting out handguns or Google robots with Android phones, imagine the skies thick with crisscrossing tiny drones.

“I know this looks like science fiction. It’s not,” Jeff Bezos told Charlie Rose on “60 Minutes” Sunday, unveiling his octocopter drones.

The Amazon founder is optimistic that the fleet of miniature robot helicopters clutching plastic containers will be ready to follow GPS coordinates within a radius of 10 miles and zip around the country providing half-hour delivery of packages of up to 5 pounds — 86 percent of Amazon’s stock — just as soon as the F.A.A. approves.

“Wow!” Rose said, absorbing the wackiness of it all.

The futuristic Pony Express to deliver pony-print coats and other Amazon goodies will be “fun,” Bezos said, and won’t start until they have “all the systems you need to say, ‘Look, this thing can’t land on somebody’s head while they’re walking around their neighborhood.’ ”

So if they can’t land on my head, why do they make my head hurt? Maybe because they are redolent of President Obama’s unhealthy attachment to lethal drones, which are killing too many innocents in Afghanistan and Pakistan, and our spy agencies’ unhealthy attachment to indiscriminate surveillance.

Or maybe they recall that eerie “Twilight Zone” episode where a Brobdingnagian Agnes Moorehead fends off tiny spaceships with a big wooden stirrer — even though these flying machines would be dropping off the housewares.

Or maybe it’s because after “60 Minutes,” “Homeland” featured a story line about a drone both faulty and morally agnostic. The White House chief of staff, wanting to cover up a bolloxed-up covert operation on the Iraq-Iran border, suggested directing the drone to finish off its own agent, Brody.

“I will not order a strike on our own men,” the acting C.I.A. chief, played by Mandy Patinkin, replied sternly. “Hang it up.”

Or maybe I am leery that Bezos, who is also dabbling in space tourism, was looking for a Cyber Monday p.r. coup by playing to Americans’ ranker instincts, hooking our instant gratification society on ever more instant gratification. Do we really need that argyle sweater plopped in our hands in half an hour as opposed to the next day? What would Pope Francis say?

And won’t all the other alpha moguls want their own drone fleets? Howard Schultz will want to drop your half-caf, bone-dry, ristretto, venti, four-pump, sugar-free, cinnamon dolce, soy, skinny Starbucks latte on the front step at 7 a.m., and Tim Cook will want to deliver the latest Apple toys the soonest, and Disney’s Robert Iger will want his drones gussied up like Mary Poppins.

It will be interesting to watch The Washington Post cover new owner Bezos as he takes on the F.A.A. over drone regulations. The agency is drafting rules to let larger commercial drones and airlines share the sky, with an eye toward issuing licenses in 2015, but a handful of states are passing restrictions of their own.

Lobbying for private unmanned drones, Bezos will be aligned with the Motion Picture Association of America, which is working to get directors the right to use drones for aerial shots.

It’s a business taking flight. Experts say there may be as many as 30,000 unmanned private and government drones flying in this country by 2020, ratcheting drones into a $90 billion industry, generating 100,000 jobs. A degree in drone management can’t be far off.

Politico writes that the logistics of drone delivery will be dizzying: “It’s easy enough to drop a package on someone’s front steps, but what if the person lives in a fifth-floor apartment? Amazon wants to launch the service in large urban areas — could a drone collide with a skyscraper?”

Drones are less restricted abroad. Irish filmmaker Caroline Campbell used one to shoot film of Google and Facebook offices in Dublin, telling Wired, “We feel that it is no more intrusive than something like Google Street View.”

Journalists, police and paparazzi jumped on the drone trend. One photographer dispatched a drone over Tina Turner’s Lake Zurich estate to snap shots of her wedding last summer — before police ordered it grounded.

According to USA Today on Tuesday, all sorts of American businesses are eluding drone restrictions: real estate representatives are getting video of luxury properties; photographers are collecting footage of Hawaiian surfers; Western farmers are monitoring their land; Sonoma vintners are checking on how their grapes are faring. As Rem Rieder wryly noted in that paper, Bezos may eventually let his drones help with home delivery of The Washington Post, “but it’s bad news for kids on bikes.”

Law enforcement agencies are eager to get drones patrolling the beat. And The Wrap reported that in the upcoming Sony remake of “RoboCop,” Samuel L. Jackson’s character, a spokesman for a multinational conglomerate that has to manufacture a special RoboCop with a conscience for America (still traumatized by “The Terminator,” no doubt) scolds Americans for being “robophobic.”
Power User
Posts: 42527

« Reply #869 on: December 05, 2013, 08:18:27 PM »
Power User
Posts: 42527

« Reply #870 on: December 06, 2013, 07:44:39 PM »

Power User
Posts: 42527

« Reply #871 on: December 08, 2013, 10:34:11 AM »
Power User
Posts: 42527

« Reply #872 on: December 11, 2013, 10:47:31 AM »

I have signed this:
Power User
Posts: 42527

« Reply #873 on: December 14, 2013, 12:30:11 PM »
Power User
Posts: 42527

« Reply #874 on: December 16, 2013, 12:04:43 PM »


          December 15, 2013

          by Bruce Schneier
       BT Security Futurologist

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at
<>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:
      NSA Spying on Online Gaming Worlds
      NSA Tracks People Using Google Cookies
      NSA And U.S. Surveillance News
      How Antivirus Companies Handle State-Sponsored Malware
      Surveillance as a Business Model
      Evading Airport Security
      Schneier News
      Crypto-Gram Has Moved
      The TQP Patent

** *** ***** ******* *********** *************

      NSA Spying on Online Gaming Worlds

The NSA is spying on chats in World of Warcraft and other games. There's lots of information -- and a good source document.  While it's fun to joke about the NSA and elves and dwarves from World of Warcraft, this kind of surveillance makes perfect sense.  If, as Dan Geer has pointed out, your assigned mission is to ensure that something never happens, the only way you can be sure that something never happens is to know
*everything* that does happen.  Which puts you in the impossible position of having to eavesdrop on every possible communications channel, including online gaming worlds.

One bit (on page 2) jumped out at me:

     The NMDC engaged SNORT, an open source packet-sniffing
     software, which runs on all FORNSAT survey packet data, to
     filter out WoW packets.  GCHQ provided several WoW protocol
     parsing scripts to process the traffic and produce Warcraft
     metadata from all NMDC FORNSAT survey.

NMDC is the New Mission Development Center, and FORNSAT stands for Foreign Satellite Collection.  MHS, which also appears in the source document, stands for -- I think -- Menwith Hill Station, a satellite eavesdropping location in the UK.

Since the Snowden documents first started being released, I have been saying that while the US has a bigger intelligence budget than the rest of the world's countries combined, agencies like the NSA are not made of magic. They're constrained by the laws of mathematics, physics, and economics -- just like everyone else.  Here's an example.  The NSA is using Snort -- an open source product that anyone can download and use
-- because that's a more cost-effective tool than anything they can develop in-house.
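
The quoted workflow, running a sniffer over a bulk packet stream and filtering one protocol's traffic out, can be sketched in a few lines. This is a toy illustration only: packets are modeled as dicts rather than raw captures, and the use of WoW's well-known TCP port 3724 as the filter key is an assumption here, not a detail from the source document.

```python
# Toy sketch of protocol filtering over a bulk packet stream, in the
# spirit of the Snort/WoW workflow described above. Real tools like
# Snort parse raw captures against rule files; here a "packet" is a
# dict, and port 3724 (WoW's usual TCP port) is an assumed filter key.

WOW_PORT = 3724

def filter_wow(packets):
    """Keep only packets to or from the assumed WoW service port."""
    return [p for p in packets
            if WOW_PORT in (p["src_port"], p["dst_port"])]

stream = [
    {"src_port": 51234, "dst_port": 3724,  "payload": b"/say hello"},
    {"src_port": 51234, "dst_port": 443,   "payload": b"tls"},
    {"src_port": 3724,  "dst_port": 51234, "payload": b"chat ack"},
]

wow_only = filter_wow(stream)
print(len(wow_only))  # 2
```

The GCHQ parsing scripts mentioned in the quote would then run only over the reduced stream, which is the whole point of filtering first: the bulk FORNSAT feed is far too large to parse in full.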

Source document: or

Dan Geer's essay:

** *** ***** ******* *********** *************

      NSA Tracks People Using Google Cookies

The "Washington Post" has a detailed article on how the NSA uses cookie data to track individuals.  The EFF also has a good post on this.

I have been writing and saying that surveillance is the business model of the Internet, and that government surveillance largely piggy backs on corporate capabilities.  This is an example of that.  The NSA doesn't need the cooperation of any Internet company to use their cookies for surveillance purposes, but they do need their capabilities.  And because the Internet is largely unencrypted, they can use those capabilities for their own purposes.

Reforming the NSA is not just about government surveillance.  It has to address the public-private surveillance partnership.  Even as a group of large Internet companies have come together to demand government surveillance reform, they are ignoring their own surveillance activities.  But you can't reform one without the other.  The Free Software Foundation has written about this as well.

Little has been written about how QUANTUM interacts with cookie surveillance.  QUANTUM is the NSA's program for real-time responses to passive Internet monitoring.  It's what allows them to do packet injection attacks.  The NSA's Tor Stinks presentation talks about a subprogram called QUANTUMCOOKIE: "forces clients to divulge stored cookies."  My guess is that the NSA uses frame injection to surreptitiously force anonymous users to visit common sites like Google and Facebook and reveal their identifying cookies.  Combined with the rest of their cookie surveillance activities, this can de-anonymize Tor users if they use Tor from the same browser they use for other Internet activities.
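
The frame-injection guess above can be illustrated with a toy transform. Everything here is an assumption for illustration, not a detail from the leaked documents: an in-path attacker rewrites an unencrypted HTML response to add invisible frames for sites where the victim likely holds cookies, and the browser then transmits those identifying cookies automatically.

```python
# Toy sketch of the frame-injection idea: rewrite a plaintext HTML
# response so the browser silently loads pages where the user holds
# identifying cookies. A string transform only, not an injection tool;
# the site list and function name are invented for illustration.

TRACKED_SITES = ["https://www.google.com/", "https://www.facebook.com/"]

def inject_frames(html: str) -> str:
    frames = "".join(
        f'<iframe src="{url}" style="display:none"></iframe>'
        for url in TRACKED_SITES
    )
    # Insert just before </body> so the page still renders normally.
    return html.replace("</body>", frames + "</body>")

page = "<html><body><p>news</p></body></html>"
tampered = inject_frames(page)
print(tampered.count("<iframe"))  # 2
```

Note that this only works against unencrypted traffic, which is why the essay's larger point about an unencrypted Internet matters: HTTPS everywhere would block this class of attack.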

Me on this issue:

Corporations calling for less surveillance:

Free Software Foundation's statement:


Tor Stinks presentation:

** *** ***** ******* *********** *************

      NSA and US Surveillance News

Nicholas Weaver has a great essay explaining how the NSA's QUANTUM
packet injection system works, what we know it does, what else it can
possibly do, and how to defend against it.  Remember that while QUANTUM
is an NSA program, other countries engage in these sorts of attacks as
well. By securing the Internet against QUANTUM, we protect ourselves
against any government or criminal use of these sorts of techniques.

The US is working to kill United Nations resolutions to limit
international surveillance.

This is a long article about the FBI's Data Intercept Technology Unit
(DITU), which is basically its own internal NSA.
There is an enormous amount of information in the article, which exposes
yet another piece of the vast US government surveillance infrastructure.
  It's good to read that "at least two" companies are fighting at least
a part of this.  Any legislation aimed at restoring security and trust
in US Internet companies needs to address the whole problem, and not
just a piece of it.

As more and more media outlets from all over the world continue to
report on the Snowden documents, it's harder and harder to keep track of
what has been released.  The EFF, ACLU, Cryptome,, and
Wikipedia are all trying.  I don't think any are complete.
And this mind map of the NSA leaks is very comprehensive.
This is also good:

** *** ***** ******* *********** *************

      How Antivirus Companies Handle State-Sponsored Malware

Since we learned that the NSA has surreptitiously weakened Internet
security so it could more easily eavesdrop, we've been wondering if it's
done anything to antivirus products. Given that it engages in offensive
cyberattacks -- and launches cyberweapons like Stuxnet and Flame -- it's
reasonable to assume that it's asked antivirus companies to ignore its
malware.  (We know that antivirus companies have previously done this
for corporate malware.)

My guess is that the NSA has not done this, nor has any other government
intelligence or law enforcement agency.  My reasoning is that antivirus
is a very international industry, and while a government might get its
own companies to play along, it would not be able to influence
international companies.  So while the NSA could certainly pressure
McAfee or Symantec -- both Silicon Valley companies --  to ignore NSA
malware, it could not similarly pressure Kaspersky Labs (Russian),
F-Secure (Finnish), or AVAST (Czech).  And the governments of Russia,
Finland, and the Czech Republic will have comparable problems.

Even so, I joined a group of security experts to ask antivirus companies
explicitly if they were ignoring malware at the behest of a government.
  Understanding that the companies could certainly lie, this is the
response so far: no one has admitted to doing so.  But most vendors
haven't replied.

** *** ***** ******* *********** *************

      Surveillance as a Business Model

Google recently announced that it would start including individual
users' names and photos in some ads. This means that if you rate some
product positively, your friends may see ads for that product with your
name and photo attached -- without your knowledge or consent. Meanwhile,
Facebook is eliminating a feature that allowed people to retain some
portions of their anonymity on its website.

These changes come on the heels of Google's move to explore replacing
tracking cookies with something that users have even less control over.
Microsoft is doing something similar by developing its own tracking technology.

More generally, lots of companies are evading the "Do Not Track" rules,
meant to give users a say in whether companies track them. Turns out the
whole "Do Not Track" legislation has been a sham.

It shouldn't come as a surprise that big technology companies are
tracking us on the Internet even more aggressively than before.

If these features don't sound particularly beneficial to you, it's
because you're not the customer of any of these companies. You're the
product, and you're being improved for their actual customers: their advertisers.

This is nothing new. For years, these sites and others have
systematically improved their "product" by reducing user privacy. This
excellent infographic, for example, illustrates how Facebook has done so
over the years.

The "Do Not Track" law serves as a sterling example of how bad things
are. When it was proposed, it was supposed to give users the right to
demand that Internet companies not track them. Internet companies fought
hard against the law, and when it was passed, they fought to ensure that
it didn't have any benefit to users. Right now, complying is entirely
voluntary, meaning that no Internet company has to follow the law. If a
company does, because it wants the PR benefit of seeming to take user
privacy seriously, it can still track its users.

Really: if you tell a "Do Not Track"-enabled company that you don't want
to be tracked, it will stop showing you personalized ads. But your
activity will be tracked -- and your personal information collected,
sold and used -- just like everyone else's. It's best to think of it as
a "track me in secret" law.
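
The "track me in secret" behavior can be sketched as a toy request handler. The handler and its names are invented for illustration, not any real ad-server API: the DNT signal switches off only the visible personalization, while the logging happens regardless.

```python
# Sketch of the behavior described above: a site that "honors" Do Not
# Track by turning off personalized ads while still logging the visit.
# All names here are invented for illustration.

activity_log = []

def handle_request(headers: dict, user_id: str) -> str:
    dnt = headers.get("DNT") == "1"
    # Tracking happens regardless of the DNT signal ...
    activity_log.append(user_id)
    # ... only the ad personalization is switched off.
    return "generic ad" if dnt else f"personalized ad for {user_id}"

print(handle_request({"DNT": "1"}, "alice"))  # generic ad
print(handle_request({}, "bob"))              # personalized ad for bob
print(len(activity_log))                      # 2 -- both visits logged
```

From the user's side the two visits look different; from the company's side they are identical.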

Of course, people don't think of it that way. Most people aren't fully
aware of how much of their data is collected by these sites. And, as the
"Do Not Track" story illustrates, Internet companies are doing their
best to keep it that way.

The result is a world where our most intimate personal details are
collected and stored. I used to say that Google has a more intimate
picture of what I'm thinking of than my wife does. But that's not far
enough: Google has a more intimate picture than I do. The company knows
exactly what I am thinking about, how much I am thinking about it, and
when I stop thinking about it: all from my Google searches. And it
remembers all of that forever.

As the Edward Snowden revelations continue to expose the full extent of
the National Security Agency's eavesdropping on the Internet, it has
become increasingly obvious how much of that has been enabled by the
corporate world's existing eavesdropping on the Internet.

The public/private surveillance partnership is fraying, but it's largely
alive and well. The NSA didn't build its eavesdropping system from
scratch; it got itself a copy of what the corporate world was already doing.

There are a lot of reasons why Internet surveillance is so prevalent and resistant to change.

One, users like free things, and don't realize how much value they're
giving away to get it. We know that "free" is a special price that
confuses people's thinking.

Google's 2013 third quarter profits were nearly $3 billion; that profit
is the difference between how much our privacy is worth and the cost of
the services we receive in exchange for it.

Two, Internet companies deliberately make privacy not salient. When you
log onto Facebook, you don't think about how much personal information
you're revealing to the company; you're chatting with your friends. When
you wake up in the morning, you don't think about how you're going to
allow a bunch of companies to track you throughout the day; you just put
your cell phone in your pocket.

And three, the Internet's winner-takes-all market means that
privacy-preserving alternatives have trouble getting off the ground. How
many of you know that there is a Google alternative called DuckDuckGo
that doesn't track you? Or that you can use cut-out sites to anonymize
your Google queries? I have opted out of Facebook, and I know it affects
my social life.

There are two types of changes that need to happen in order to fix this.
First, there's the market change. We need to become actual customers of
these sites so we can use purchasing power to force them to take our
privacy seriously. But that's not enough. Because of the market failures
surrounding privacy, a second change is needed. We need government
regulations that protect our privacy by limiting what these sites can do
with our data.

Surveillance is the business model of the Internet -- Al Gore recently
called it a "stalker economy." All major websites run on advertising,
and the more personal and targeted that advertising is, the more revenue
the site gets for it. As long as we users remain the product, there is
minimal incentive for these companies to provide any real privacy.

This essay previously appeared on

Google's actions:,0,419118.story

Facebook's actions:

Microsoft's actions:

Evading "Do Not Track":

Internet tracking by corporations:

The public/private surveillance partnership: or

Al Gore's remarks:

** *** ***** ******* *********** *************


Fokirtor is a Linux Trojan that exfiltrates traffic by inserting it into
SSH connections.  It looks very well-designed and -constructed.

Tips on how to avoid getting arrested, more psychological than security.
Rebuttal and discussion:

Renesys is reporting that Internet traffic is being manipulatively
rerouted, presumably for eavesdropping purposes.  The attacks exploit
flaws in the Border Gateway Protocol (BGP).  The odds that the NSA is
not doing this sort of thing are basically zero, but I'm sure that their
activities are going to be harder to discover.
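
Detection of this kind of rerouting typically means watching for BGP announcements that conflict with known-good routing: a covering route from the wrong origin AS, or a more-specific subnet (which wins by longest-prefix match). A minimal sketch, with made-up prefixes and AS numbers:

```python
# Sketch of one way manipulative rerouting gets spotted: flag any BGP
# announcement that covers a monitored prefix but comes from an
# unexpected origin AS, or that announces a more-specific subnet.
# The prefix and AS numbers below are invented for illustration.
import ipaddress

EXPECTED = {"203.0.113.0/24": 64500}  # prefix -> legitimate origin AS

def suspicious(announcement) -> bool:
    net = ipaddress.ip_network(announcement["prefix"])
    for prefix, origin in EXPECTED.items():
        known = ipaddress.ip_network(prefix)
        if net.subnet_of(known):
            # Wrong origin AS, or a more-specific subnet (which wins
            # by longest-prefix match), is a red flag either way.
            return (announcement["origin_as"] != origin
                    or net.prefixlen > known.prefixlen)
    return False

print(suspicious({"prefix": "203.0.113.0/25", "origin_as": 64500}))  # True
print(suspicious({"prefix": "203.0.113.0/24", "origin_as": 64666}))  # True
print(suspicious({"prefix": "203.0.113.0/24", "origin_as": 64500}))  # False
```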

Safeplug is an easy-to-use Tor appliance.  I like that it can also act
as a Tor exit node.  I know nothing about this appliance, nor do I
endorse it.  In fact, I would like it to be independently audited before
we start trusting it.  But it's a fascinating proof-of-concept of
encapsulating security so that normal Internet users can use it.

Ralph Langer has written the definitive analysis of Stuxnet.  There's a
short, popular version, and long, technical version.

Earlier this month, Eugene Kaspersky said that Stuxnet also damaged a
Russian nuclear power station and the International Space Station.

Some apps are being distributed with secret Bitcoin-mining software
embedded in them.  Coins found are sent back to the app owners, of
course.  And to make it legal, it's part of the  end-user license
agreement (EULA).  This is a great example of why EULAs are bad.  The
GameStation stunt that resulted in 7,500 people giving their
immortal souls a few years ago was funny, but hijacking users' computers
for profit is actually bad. or

Here's a new biometric I know nothing about: your heartwave.

Telepathwords is a pretty clever research project that tries to evaluate
password strength.  It's different from normal strength meters, and I
think better.  Password-strength evaluators have generally been pretty
poor, regularly assessing weak passwords as strong (and vice versa).  I
like seeing new research in this area.
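
The predictive idea behind such a meter can be sketched as follows. This is an assumption about the approach inferred from the description, not Telepathwords' actual model: characters an attacker could guess as the continuation of a common string add little strength, so only the unpredictable ones count.

```python
# Toy sketch of a prediction-based strength meter: count only the
# characters that can't be guessed as the next letter of a common
# string. The pattern list is tiny for illustration; a real meter
# would use large corpora of passwords, words, and keyboard walks.

COMMON = ["password", "1234567890", "qwerty", "letmein"]

def predicted(prefix: str, ch: str) -> bool:
    """Is ch guessable as the continuation of some common string?"""
    for s in COMMON:
        for start in range(len(prefix) + 1):
            tail = prefix[start:]  # recently typed characters
            if len(tail) < len(s) and s.startswith(tail) and s[len(tail)] == ch:
                return True
    return False

def strength(pw: str) -> int:
    """Number of characters the predictor could NOT guess."""
    return sum(0 if predicted(pw[:i].lower(), c.lower()) else 1
               for i, c in enumerate(pw))

print(strength("password"))  # 0 -- every character was predictable
print(strength("zebra"))     # 5 -- nothing matched a common pattern
```

This is why prediction-based meters disagree with the usual "length plus character classes" meters: "Password1234" scores well on the latter and terribly on the former.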

This is the best explanation of the Bitcoin protocol that I have read.

** *** ***** ******* *********** *************

      Evading Airport Security

The news is reporting about Evan Booth, who builds weaponry out of items
you can buy after airport security.  It's clever stuff.

It's not new, though.  People have been explaining how to evade airport
security for years.

Back in 2006, I -- and others -- explained how to print your own
boarding pass and evade the photo-ID check, a trick that still seems to
work.  In 2008, I demonstrated carrying two large bottles of liquid
through airport security.  There's a paper about stabbing people with
stuff you can take through airport security.  And there's a German video
of someone building a bomb out of components he snuck through a
full-body scanner.  There's lots more if you start poking around the Internet.

So, what's the moral here?  It's not like the terrorists don't know
about these tricks.  They're no surprise to the TSA, either.  If airport
security is so porous, why aren't there more terrorist attacks?  Why
aren't the terrorists using these, and other, techniques to attack
planes every month?

I think the answer is simple: airplane terrorism isn't a big risk. There
are very few actual terrorists, and plots are much more difficult to
execute than the tactics of the attack itself.  It's the same reason why
I don't care very much about the various TSA mistakes that are regularly reported.

Evan Booth: or

Bypassing the boarding pass check at airport security:

Carrying lots of liquids through airport security:

Stabbing people after airport security:

Bringing a bomb through a full-body scanner:

Why terrorism is difficult:

** *** ***** ******* *********** *************

      Schneier News

I did a Reddit "Ask Me Anything" on 22 November.

0-Day Clothing has taken 25 Bruce Schneier Facts and turned them into
T-shirts just in time for Christmas.

I have a new book.  It's "Carry On: Sound Advice from Schneier on
Security," and it's my second collection of essays.  This book covers my
writings from March 2008 to June 2013.  (My first collection of essays,
"Schneier on Security," covered my writings from April 2002 to February
2008.)  There's nothing in this book that hasn't been published before,
and nothing you can't get free off my website.  But if you're looking
for my recent writings in a convenient-to-carry hardcover-book format,
this is the book for you.  Unfortunately, the paper book isn't due in
stores -- either online or brick-and-mortar -- until 12/27, which makes
it a pretty lousy Christmas gift, though Amazon and B&N both claim it'll
be in stock there on December 16.  And if you don't mind waiting until
after the new year, I will sell you a signed copy of the book.

I'm speaking at the Real World Cryptography Workshop in New York on
January 15.

** *** ***** ******* *********** *************

      Crypto-Gram Has Moved

The Crypto-Gram mailing list has moved to a new server and new software
(Mailman). Most of you won't notice any difference -- except that this
month's newsletter should get to you much faster than last month's.
However, if you've saved any old subscribe/unsubscribe instructions that
involve sending e-mail or visiting, those
will no longer work.  If you want to unsubscribe, the easiest thing is
to use the personalized unsubscribe link at the bottom of this e-mail.
And you can always find the current instructions here:

** *** ***** ******* *********** *************

      The TQP Patent

One of the things I do is expert witness work in patent litigations.
Often, it's defending companies against patent trolls.  One of the
patents I have worked on for several defendants is owned by a company
called TQP Development.  The patent owner claims that it covers SSL and
RC4, which it does not.  The patent owner claims that the patent is
novel, which it is not.  Despite this, TQP has managed to make $45
million off the patent, almost entirely as a result of private
settlements.  One company, Newegg, fought and lost -- although it's
planning to appeal.

There is legislation pending in the US to help stop patent trolls.  Help
support it.

Patent trolls:

TQP vs Newegg:

Pending US legislation:

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing
summaries, analyses, insights, and commentaries on security: computer
and otherwise. You can subscribe, unsubscribe, or change your address on
the Web at <>. Back issues are
also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to
colleagues and friends who will find it valuable. Permission is also
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an
internationally renowned security technologist, called a "security guru"
by The Economist. He is the author of 12 books -- including "Liars and
Outliers: Enabling the Trust Society Needs to Survive" -- as well as
hundreds of articles, essays, and academic papers. His influential
newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by
over 250,000 people. He has testified before Congress, is a frequent
guest on television and radio, has served on several government
committees, and is regularly quoted in the press. Schneier is a fellow
at the Berkman Center for Internet and Society at Harvard Law School, a
program fellow at the New America Foundation's Open Technology
Institute, a board member of the Electronic Frontier Foundation, an
Advisory Board Member of the Electronic Privacy Information Center, and
the Security Futurologist for BT -- formerly British Telecom.  See

Crypto-Gram is a personal newsletter. Opinions expressed are not
necessarily those of BT.

Copyright (c) 2013 by Bruce Schneier.
« Last Edit: December 16, 2013, 12:14:32 PM by Crafty_Dog » Logged
Power User
Posts: 42527

« Reply #875 on: December 17, 2013, 10:55:18 AM »

Data Mining to Recruit Sick People
Companies Use Information From Data Brokers, Pharmacies, Social Networks
by Joseph Walker
Dec. 16, 2013 6:53 p.m. ET

Some health-care companies are pulling back the curtain on medical privacy without ever accessing personal medical records, by probing readily available information from data brokers, pharmacies and social networks that offer indirect clues to an individual's health.

Companies specializing in patient recruitment for clinical trials use hundreds of data points—from age and race to shopping habits—to identify the sick and target them with telemarketing calls and direct-mail pitches to participate in research.

Blue Chip Marketing Worldwide, a drug-industry contractor, found patients for an obesity drug by targeting people with characteristics suggestive of a sedentary lifestyle, like subscribing to premium cable TV and frequent fast-food dining. Acurian Inc., one of the largest recruitment companies, says innocuous personal details—a preference for jazz, owning a cat or participation in sweepstakes—helped it home in on patients for an arthritis study.

"We are now at a point where, based on your credit-card history, and whether you drive an American automobile and several other lifestyle factors, we can get a very, very close bead on whether or not you have the disease state we're looking at," said Roger Smith, senior vice president of operations at Horsham, Pa.-based Acurian, a unit of Pharmaceutical Product Development LLC.
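
The trait-scoring approach these recruiters describe amounts to summing weights over lifestyle data points and ranking people above a threshold as likely candidates. A minimal sketch; the traits, weights, and threshold below are invented for illustration, since the companies' actual models are proprietary.

```python
# Toy sketch of lifestyle-based candidate scoring as described above.
# Every trait name, weight, and the threshold are invented for
# illustration; real models are proprietary and use hundreds of
# purchased data points.

OBESITY_SIGNALS = {
    "premium_cable": 1.0,
    "frequent_fast_food": 2.0,
    "online_clothes_shopping": 1.5,
    "gym_membership": -1.5,   # negative evidence lowers the score
}

def score(traits: set) -> float:
    return sum(w for trait, w in OBESITY_SIGNALS.items() if trait in traits)

def likely_candidates(profiles: dict, threshold: float = 2.5):
    return [name for name, traits in profiles.items()
            if score(traits) >= threshold]

people = {
    "A": {"premium_cable", "frequent_fast_food"},            # 3.0
    "B": {"gym_membership", "online_clothes_shopping"},      # 0.0
    "C": {"frequent_fast_food", "online_clothes_shopping"},  # 3.5
}
print(likely_candidates(people))  # ['A', 'C']
```

Note that nothing in the input is a medical record, which is exactly the HIPAA gap the article goes on to describe.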

Targeted advertising has long been used in the retail industry, but its use in health care is raising new concerns. Privacy experts and bioethicists say that as data-mining methods become more sophisticated, it is becoming harder to keep medical conditions private. Targeted consumers have complained to regulators about intrusive tactics and worries that their medical records have been compromised.

"My private information, especially my medical information, I'm extremely protective of it," says Delbert Kerby, 62 years old, of Rocklin, Calif. The telecommunications consultant says he was surprised when telemarketers called him last year about a study of arthritis. The company didn't leave its name, he says, but he filed a complaint with the Federal Trade Commission about the call. (He has arthritis but has no idea how the company targeted him.)

Federal law bars doctors, insurers and other health-care providers from sharing or selling personally identifiable information in patients' medical records without permission, under the Health Insurance Portability and Accountability Act, or HIPAA. The law doesn't, however, protect the clues that people leave about their health outside of their medical records—when they make credit-card purchases or search the Internet. Law professor Nicolas P. Terry calls such information "medically inflected data."

"I think patients would be shocked to find out how little privacy protection they have outside of traditional health care," says Mr. Terry, professor and co-director at the Center for Law and Health at Indiana University's law school. He adds, "Big Data essentially can operate in a HIPAA-free zone."

Research firms and patient recruiters, including both Blue Chip and Acurian, say they abide by HIPAA and privacy laws.

Experian PLC, the Dublin, Ireland-based data broker and credit-reporting company, says its marketing-services unit sells data to numerous health-care marketing companies. "However, we do not share any protected health information, and therefore are not providing data that would fall into HIPAA requirements," says Gerry Tschopp, senior vice president for public affairs.

A driver of the trend is the need to speed up recruitment and completion of clinical trials. Drug makers often need thousands of patients for late-stage trials, which can take years to accomplish, lengthening the time it takes to bring a drug to market while the clock is running on the drug's patent exclusivity.

When Orexigen Therapeutics Inc., a La Jolla, Calif.-based biotechnology company, needed to enroll 9,000 patients into a study of its diet drug Contrave last year, it turned to Blue Chip. Consultants had said it would take two years to finish enrollment, a timeline that was "not acceptable," says Mark Booth, Orexigen's chief commercial officer.

Blue Chip, of Northbrook, Ill., recruited half of all study patients, helping to complete enrollment in a little over six months, Mr. Booth says. With consumer profiles purchased from data companies like Experian, Blue Chip applied a computer algorithm to flag clues about a person's weight, such as fast-food dining and a history of shopping online for clothes. Online clothes shopping can indicate obesity, Blue Chip says, because overweight people often can't find plus sizes in traditional stores or are uncomfortable shopping in public.

"The types of magazines you buy, how often you buy running shorts, all of those things tell a story," says Blue Chip Executive Vice President Ken Shore.
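The profiling described above amounts to rule-based scoring over purchased consumer attributes. A minimal sketch of the idea follows; the attribute names, weights, and threshold here are all invented for illustration, not Blue Chip's actual proprietary model:

```python
# Hypothetical sketch of rule-based consumer profiling for trial recruitment.
# Every attribute name and weight below is invented; the real algorithm
# described in the article is proprietary and unpublished.

def obesity_signal_score(profile):
    """Count weight-related clues present in a consumer-profile dict."""
    score = 0
    if profile.get("frequent_fast_food"):
        score += 1
    if profile.get("shops_clothing_online"):  # plus sizes scarce in stores
        score += 1
    if "fitness" not in profile.get("magazine_subscriptions", []):
        score += 1
    return score

def flag_candidates(profiles, threshold=2):
    """Return the profiles whose clue count meets the threshold."""
    return [p for p in profiles if obesity_signal_score(p) >= threshold]
```

The point of the sketch is how coarse the inputs are: each signal is an ordinary purchase or subscription record, yet combined they yield a health inference that HIPAA never touches.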

Orexigen said last week it had submitted a new drug application for Contrave, and the FDA could make a decision in 2014.

The majority of patients are still recruited through traditional channels such as health-care providers and television ads; newer methods like data mining and social networks account for about 14% of the tactics used by drug makers and their contractors, according to the Tufts Center for the Study of Drug Development. Blue Chip, which also uses traditional advertising, says it charges about $2,000 for each patient it enrolls into a study.

Profiling patients based on demographics and purchasing habits, however, can be more effective in finding people who aren't online or haven't recently sought medical treatment, recruitment professionals say.

FTC Commissioner Julie Brill says she is worried that the use of nonprotected consumer data can be used to deny employment or inadvertently reveal illnesses that people want kept secret. "As Big Data algorithms become more accurate and powerful, consumers need to know a lot more about the ways in which their data is used," Ms. Brill says.

Acurian, which has worked with large drug and medical-device companies such as Eli Lilly & Co. and Medtronic Inc., has been the subject of more than 500 complaints to the FTC over the past two years, alleging violations of telemarketing laws, according to records obtained through a public records request. The FTC hasn't taken any actions against Acurian, said agency spokesman Mitchell Katz. The commission doesn't comment on current investigations as a matter of policy, he said.

Acurian, named as a defendant in a federal lawsuit related to its telemarketing practices, declined to comment on the allegations. In court documents, the company has said that calls related to medical studies aren't advertisements as defined by law.

A Medtronic spokeswoman said the company had hired Acurian for projects like contacting patients from completed studies, but not to identify new study subjects. An Eli Lilly spokeswoman said the company works with Acurian on recruitment campaigns, including through direct mail.

Larna Godsey, of Wichita, Kan., says she received a dozen phone calls about a diabetes drug study over the past year from a company that didn't identify itself. Ms. Godsey, 63, doesn't suffer from the disease, but she has researched it on the Internet and donated to diabetes-related causes. "I don't know if it's just a coincidence or if they're somehow getting my information," says Ms. Godsey, who filed a complaint with the FTC this year.
Power User
Posts: 7838

« Reply #876 on: December 17, 2013, 07:08:26 PM »

That is why today's meeting of the tech CEOs and Obama strikes me as a joke.

It is like one of the most corrupt politicians in American history meeting with organized crime figures.

Both are guilty of the same thing, yet pretend they are not linked - two peas in the same pod.

Who is going to protect us from this stuff? BD says not to worry, we have regulations.

Are you kidding? No one even enforces the ones we have.
Power User
Posts: 42527

« Reply #878 on: December 19, 2013, 03:26:20 PM »

second post
Power User
Posts: 2268

« Reply #880 on: December 20, 2013, 09:45:12 PM »

I know that it is a mistake to link to the HuffPo, but I think this might interest some of you. It is a talk given in March:
Power User
Posts: 42527

« Reply #884 on: January 02, 2014, 02:56:30 PM »

second post

Does this also apply to the 100 mile border zone?
« Last Edit: January 02, 2014, 03:01:40 PM by Crafty_Dog » Logged
Power User
Posts: 42527

« Reply #885 on: January 04, 2014, 03:31:13 PM »

FINRA wants the ability to look at every transaction in your brokerage account.
Power User
Posts: 7838

« Reply #888 on: January 17, 2014, 09:38:13 PM »

The next step would be to tag every person with data-collecting devices, the way biologists tag animals to track their behavior.

Anyone who thinks this kind of power will not be abused is a total nut job. And that includes Peter King, who on Geraldo's radio show this AM was making claims about the NSA.

What did the fool say? That these are professionals dedicated to our safety? That no one can prove one shred of evidence of abuse?

My answer is simply yes - Ed Snowden just did.

How could the rest of us prove anything, King? How would anyone even know?

Is he kidding?

King is way off my list. I would rather vote for Hillary. Is this what Republicans have to offer?
Power User
Posts: 2268

« Reply #890 on: January 21, 2014, 09:48:07 AM »

Damn, both for the father and for the info collected:

From the article:

Mike Seay of Lindenhurst, Ill. received the piece of mail Thursday that listed in the address line below his name: "Daughter Killed in Car Crash."

"Why would they have that type of information? Why would they need that?" Seay told NBC.
« Last Edit: January 21, 2014, 10:28:24 AM by Crafty_Dog » Logged
« Reply #891 on: January 22, 2014, 08:23:36 PM »

Hmm, looks like every time I drive my wife to the eye clinic at Johns Hopkins the trip will turn into a Big Brother crapshoot:
Power User
Posts: 15533

« Reply #892 on: January 22, 2014, 08:44:27 PM »

Hmm, looks like every time I drive my wife to the eye clinic at Johns Hopkins the trip will turn into a Big Brother crapshoot:

I'm curious what the PC for the stop would be.
Power User
Posts: 15533

« Reply #893 on: January 22, 2014, 10:21:46 PM »

I wonder if a MD judge would uphold the search of a vehicle based on an out of state medical marijuana card? What about a possible illegal alien from a state that gives them driver's licenses?
Power User
Posts: 15533

« Reply #894 on: January 22, 2014, 10:27:40 PM »

If I was in such circumstances, I'd provide the requested documents. If questioned about firearms, I'd state in a polite manner that I would not answer any questions without an attorney present. I would not consent to a search and I would politely ask if I was still being detained.
Power User
Posts: 15533

« Reply #895 on: January 22, 2014, 10:32:56 PM »

Oh yeah, be sure not a single firearm accessory or stray round of ammo is in your vehicle. It's not like you'll get the professional-journalist gun-law exemption David Gregory got...
Power User
Posts: 42527

« Reply #896 on: January 23, 2014, 12:27:37 AM »

Good find BBG!

Good analysis GM.

Defending our Freedom increasingly requires our will and intent.
« Reply #898 on: January 24, 2014, 10:30:30 PM »

Johns Hopkins is near an ugly part of town. The drive out requires traversing the west side of Baltimore--a place that gives Detroit a run for its money (think The Wire)--and several dozen traffic signals. Without copping to any putative crimes, it annoys the bejesus out of me to have to choose between being able to effectively protect me and mine in ugly neighborhoods and getting my wife the best eye care available.
Power User
Posts: 42527

« Reply #899 on: January 27, 2014, 12:03:17 PM »

Spy Agencies Scour Mobile Phone Apps for Personal Data, Documents Say

When a smartphone user opens Angry Birds, the popular game application, and starts slinging birds at chortling green pigs, spy agencies have plotted how to lurk in the background to snatch data revealing the player’s location, age, sex and other personal information, according to secret British intelligence documents.
In their globe-spanning surveillance for terrorism suspects and other targets, the National Security Agency and its British counterpart have been trying to exploit a basic byproduct of modern telecommunications: With each new generation of mobile phone technology, ever greater amounts of personal data pour onto networks where spies can pick it up.

According to dozens of previously undisclosed classified documents, among the most valuable of those unintended intelligence tools are so-called leaky apps that spew everything from users’ smartphone identification codes to where they have been that day.

The N.S.A. and Britain’s Government Communications Headquarters were working together on how to collect and store data from dozens of smartphone apps by 2007, according to the documents, provided by Edward J. Snowden, the former N.S.A. contractor. Since then, the agencies have traded recipes for grabbing location and planning data when a target uses Google Maps, and for vacuuming up address books, buddy lists, phone logs and the geographic data embedded in photos when someone sends a post to the mobile versions of Facebook, Flickr, LinkedIn, Twitter and other services.
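The "leaky app" problem described above comes down to apps transmitting identifiers, demographics, and location as plaintext query parameters that any passive observer on the network path can read. A hypothetical illustration of how trivially such a request parses (the URL and parameter names are invented, not taken from the documents):

```python
# Hypothetical illustration of a "leaky" app request: personal data rides
# along in a plaintext query string that a passive observer can parse.
# The URL and all parameter names are invented for this example.
from urllib.parse import urlparse, parse_qs

captured_request = (
    "http://ads.example.com/serve?device_id=a1b2c3d4"
    "&lat=39.2904&lon=-76.6122&age=34&gender=f"
)

params = parse_qs(urlparse(captured_request).query)
# parse_qs maps each key to a list of values; take the first of each.
leaked = {k: v[0] for k, v in params.items()}
print(leaked["device_id"], leaked["lat"], leaked["lon"])
```

No decryption or exploitation is needed: because the request travels unencrypted, collecting it wholesale is enough to recover the device ID and a location fix, which is precisely what makes such traffic attractive to the agencies.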

