[kictanet] Fwd: CRYPTO-GRAM, August 15, 2019

Nelson Kwaje nelson at web4all.co.ke
Thu Aug 15 08:58:18 EAT 2019


Hello members, this info might be helpful to some.
---------- Forwarded message ---------
From: Bruce Schneier <schneier at schneier.com>
Date: Thu, Aug 15, 2019 at 8:51 AM
Subject: CRYPTO-GRAM, August 15, 2019
To: <nelson at web4all.co.ke>


Crypto-Gram
August 15, 2019

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier at schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page
<https://www.schneier.com/crypto-gram.html>.

Read this issue on the web
<https://www.schneier.com/crypto-gram/archives/2019/0815.html>

These same essays and news items appear in the Schneier on Security
<https://www.schneier.com/> blog, along with a lively and intelligent
comment section. An RSS feed is available.

** *** ***** ******* *********** *************
In this issue:

   1. Palantir's Surveillance Service for Law Enforcement
   2. Zoom Vulnerability
   3. Identity Theft on the Job Market
   4. John Paul Stevens Was a Cryptographer
   5. A Harlequin Romance Novel about Hackers
   6. Hackers Expose Russian FSB Cyberattack Projects
   7. Science Fiction Writers Helping Imagine Future Threats
   8. Attorney General William Barr on Encryption Policy
   9. Software Developers and Security
   10. Insider Logic Bombs
   11. Wanted: Cybersecurity Imagery
   12. ACLU on the GCHQ Backdoor Proposal
   13. Another Attack Against Driverless Cars
   14. Facebook Plans on Backdooring WhatsApp
   15. How Privacy Laws Hurt Defendants
   16. Disabling Security Cameras with Lasers
   17. More on Backdooring (or Not) WhatsApp
   18. Regulating International Trade in Commercial Spyware
   19. Phone Pharming for Ad Fraud
   20. Brazilian Cell Phone Hack
   21. AT&T Employees Took Bribes to Unlock Smartphones
   22. Supply-Chain Attack against the Electron Development Platform
   23. Evaluating the NSA's Telephony Metadata Program
   24. Exploiting GDPR to Get Private Information
   25. Side-Channel Attack against Electronic Locks

** *** ***** ******* *********** *************
Palantir's Surveillance Service for Law Enforcement

*[2019.07.15]*
<https://www.schneier.com/blog/archives/2019/07/palantirs_surve.html>
Motherboard got its hands on
<https://www.vice.com/en_us/article/9kx4z8/revealed-this-is-palantirs-top-secret-user-manual-for-cops>
Palantir's Gotham user's manual, which is used by the police to get
information on people:

The Palantir user guide shows that police can start with almost no
information about a person of interest and instantly know extremely
intimate details about their lives. The capabilities are staggering,
according to the guide:

   - If police have a name that's associated with a license plate, they can
   use automatic license plate reader data to find out where they've been, and
   when they've been there. This can give a complete account of where someone
   has driven over any time period.
   - With a name, police can also find a person's email address, phone
   numbers, current and previous addresses, bank accounts, social security
   number(s), business relationships, family relationships, and license
   information like height, weight, and eye color, as long as it's in the
   agency's database.
   - The software can map out a suspect's family members and business
   associates, and, theoretically, find the above information about them,
   too.

All of this information is aggregated and synthesized in a way that gives
law enforcement nearly omniscient knowledge over any suspect they decide to
surveil.
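The aggregation step the article describes is essentially record linkage: joining separate agency data sources on a shared key. A toy sketch, in which all source names, fields, and records are hypothetical:

```python
# Toy sketch of record linkage: merging every record that matches a name
# across several independent data sources. Purely illustrative data.

def aggregate_profile(name, sources):
    """Merge every record matching `name` across the given source tables."""
    profile = {"name": name}
    for source_name, records in sources.items():
        matches = [r for r in records if r.get("name") == name]
        if matches:
            profile[source_name] = matches
    return profile

dmv = [{"name": "J. Doe", "plate": "ABC-123", "eye_color": "brown"}]
alpr = [{"name": "J. Doe", "plate": "ABC-123",
         "seen": "2019-06-01 14:02", "location": "5th & Main"}]
profile = aggregate_profile("J. Doe", {"dmv": dmv, "alpr": alpr})
```

Each new data source widens the profile without any extra effort per query, which is what makes the aggregate so much more revealing than any single database.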

Read the whole article -- it has a lot of details. This seems like a
commercial version of the NSA's XKEYSCORE.

Boing Boing post
<https://boingboing.net/2019/07/12/leaked-palantir-gotham.html>.

Meanwhile
<https://www.engadget.com/2019/07/12/fbi-social-media-monitoring-tool-rfp/>:

The FBI wants to gather more information from social media. Today, it
issued a call for contracts for a new social media monitoring tool
<https://www.fbo.gov/index?s=opportunity&mode=form&id=fe943e551236e0e62ee0843d5803781e&tab=core&_cview=0>.
According to a request-for-proposals (RFP), it's looking for an "early
alerting tool" that would help it monitor terrorist groups, domestic
threats, criminal activity and the like.

The tool would provide the FBI with access to the full social media
profiles of persons-of-interest. That could include information like user
IDs, emails, IP addresses and telephone numbers. The tool would also allow
the FBI to track people based on location, enable persistent keyword
monitoring and provide access to personal social media history. According
to the RFP, "The mission-critical exploitation of social media will enable
the Bureau to detect, disrupt, and investigate an ever growing diverse
range of threats to U.S. National interests."

** *** ***** ******* *********** *************
Zoom Vulnerability

*[2019.07.16]*
<https://www.schneier.com/blog/archives/2019/07/zoom_vulnerabil.html> The
Zoom conferencing app has
<https://tidbits.com/2019/07/09/zoom-and-ringcentral-exploits-allows-remote-webcam-access/>
a
<https://medium.com/bugbountywriteup/zoom-zero-day-4-million-webcams-maybe-an-rce-just-get-them-to-visit-your-website-ac75c83f4ef5>
vulnerability <https://boingboing.net/2019/07/09/wontfix.html> that allows
someone to remotely take over the computer's camera.

It's a bad vulnerability, made worse
<https://medium.com/bugbountywriteup/zoom-zero-day-4-million-webcams-maybe-an-rce-just-get-them-to-visit-your-website-ac75c83f4ef5>
by the fact that it remains even if you uninstall the Zoom app:

This vulnerability allows any website to forcibly join a user to a Zoom
call, with their video camera activated, without the user's permission.

On top of this, this vulnerability would have allowed any webpage to DOS
(Denial of Service) a Mac by repeatedly joining a user to an invalid call.

Additionally, if you've ever installed the Zoom client and then uninstalled
it, you still have a localhost web server on your machine that will happily
re-install the Zoom client for you, without requiring any user interaction
on your behalf besides visiting a webpage. This re-install 'feature'
continues to work to this day.
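The public write-ups describe that leftover component as a web server listening on localhost; port 19421 is the number reported in the disclosure, and is an assumption here, not something from this article. A minimal probe for whether anything is listening there:

```python
import socket

def zoom_server_listening(port=19421, timeout=0.5):
    """Return True if something accepts TCP connections on localhost:port.

    Port 19421 is the one reported in the public Zoom disclosure; treat
    it as an assumption, not a guarantee, and a listener on that port is
    not proof it is Zoom's."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except OSError:  # connection refused or timed out: nothing listening
        return False
```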

Zoom didn't take the vulnerability seriously:

This vulnerability was originally responsibly disclosed on March 26, 2019.
This initial report included a proposed description of a 'quick fix' Zoom
could have implemented by simply changing their server logic. It took Zoom
10 days to confirm the vulnerability. The first actual meeting about how
the vulnerability would be patched occurred on June 11th, 2019, only 18
days before the end of the 90-day public disclosure deadline. During this
meeting, the details of the vulnerability were confirmed and Zoom's planned
solution was discussed. However, I was very easily able to spot and
describe bypasses in their planned fix. At this point, Zoom was left with
18 days to resolve the vulnerability. On June 24th after 90 days of
waiting, the last day before the public disclosure deadline, I discovered
that Zoom had only implemented the 'quick fix' solution originally
suggested.

This is why we disclose vulnerabilities. Now, finally, Zoom is taking this
seriously and fixing it for real.

EDITED TO ADD (8/8): Apple silently released
<https://www.theverge.com/2019/7/10/20689644/apple-zoom-web-server-automatic-removal-silent-update-webcam-vulnerability>
a macOS update that removes the Zoom webserver if the app is not present.

** *** ***** ******* *********** *************
Identity Theft on the Job Market

*[2019.07.18]*
<https://www.schneier.com/blog/archives/2019/07/identity_theft_.html>
Identity theft is getting more subtle: "My job application was withdrawn by
someone pretending to be me <https://www.bbc.com/news/business-48995846>":

When Mr Fearn applied for a job at the company he didn't hear back.

He said the recruitment team said they'd get back to him by Friday, but
they never did.

At first, he assumed he was unsuccessful, but after emailing his contact
there, it turned out someone had created a Gmail account in his name and
asked the company to withdraw his application.

Mr Fearn said the talent assistant told him they were confused because he
had apparently emailed them to withdraw his application on Wednesday.

"They forwarded the email, which was sent from an account using my name."

He said he felt "really shocked and violated" to find out that someone had
created an email account in his name just to tarnish his chances of getting
a role.

This is about as low-tech as it gets. It's trivially simple for me to open
a new Gmail account using a random first and last name. But because people
innately trust email, it works.
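Part of why it works: a message from an impostor-registered Gmail account passes every technical authentication check, because SPF and DKIM only vouch for the sending domain, not the human behind the address. A sketch with fabricated headers:

```python
from email import message_from_string

# Fabricated example message: a fraudulent mail sent from a real (but
# impostor-registered) Gmail account. The headers below are invented.
RAW = """\
From: Victim Name <victim.name@gmail.com>
Authentication-Results: mx.example.com; spf=pass; dkim=pass header.d=gmail.com
Subject: Please withdraw my application

Please withdraw my application from consideration.
"""

def passes_basic_auth_checks(raw_message):
    """True if the message carries passing SPF and DKIM results.

    These checks prove the mail came from the claimed domain (gmail.com);
    they say nothing about who registered the look-alike address."""
    results = message_from_string(raw_message).get("Authentication-Results", "")
    return "spf=pass" in results and "dkim=pass" in results
```

Since the fraudulent message authenticates cleanly, the only real defense is out-of-band verification, such as the recruiter phoning the candidate.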

** *** ***** ******* *********** *************
John Paul Stevens Was a Cryptographer

*[2019.07.19]*
<https://www.schneier.com/blog/archives/2019/07/john_paul_steve.html> I
didn't know that Supreme Court Justice John Paul Stevens "was also a
cryptographer
for the Navy <https://epic.org/2019/07/justice-john-paul-stevens-1920.html>
during World War II." He was a proponent of individual
<https://www.law360.com/health/articles/1179418/3-times-justice-stevens-boosted-privacy-rights>
privacy <https://epic.org/privacy/justice_stevens.html>.

EDITED TO ADD (8/12): More
<https://stationhypo.com/2019/07/17/remembering-the-honorable-john-paul-stevens-justice-u-s-supreme-court-and-wwii-navy-cryptologist/>
on his cryptography career.

** *** ***** ******* *********** *************
A Harlequin Romance Novel about Hackers

*[2019.07.19]*
<https://www.schneier.com/blog/archives/2019/07/a_harlequin_rom.html> Really
<https://arstechnica.com/gaming/2019/07/talk-cyber-to-me-ars-reads-a-harlequin-hacker-romance/>
.

** *** ***** ******* *********** *************
Hackers Expose Russian FSB Cyberattack Projects

*[2019.07.22]*
<https://www.schneier.com/blog/archives/2019/07/hackers_expose_.html> More
nation-state activity in cyberspace, this time from Russia
<https://www.zdnet.com/article/hackers-breach-fsb-contractor-expose-tor-deanonymization-project/#ftag=CAD-00-10aag7e>
:

Per the different reports in Russian media, the files indicate that SyTech
had worked since 2009 on a multitude of projects for FSB unit 71330 and for
fellow contractor Quantum. Projects include:

   - *Nautilus* -- a project for collecting data about social media users
   (such as Facebook, MySpace, and LinkedIn).
   - *Nautilus-S* -- a project for deanonymizing Tor traffic with the help
   of rogue Tor servers.
   - *Reward* -- a project to covertly penetrate P2P networks, like the one
   used for torrents.
   - *Mentor* -- a project to monitor and search email communications on
   the servers of Russian companies.
   - *Hope* -- a project to investigate the topology of the Russian
   internet and how it connects to other countries' networks.
   - *Tax-3* -- a project for the creation of a closed intranet to store
   the information of highly-sensitive state figures, judges, and local
   administration officials, separate from the rest of the state's IT networks.

BBC Russia, who received the full trove of documents, claims there were
other older projects for researching other network protocols such as Jabber
(instant messaging), ED2K (eDonkey), and OpenFT (enterprise file transfer).

Other files posted on the Digital Revolution Twitter account claimed that
the FSB was also tracking students and pensioners.

** *** ***** ******* *********** *************
Science Fiction Writers Helping Imagine Future Threats

*[2019.07.23]*
<https://www.schneier.com/blog/archives/2019/07/science_fiction_1.html> The
French army is going to put together
<https://www.bbc.com/news/world-europe-49044892> a team of science fiction
writers to help imagine future threats.

Leaving aside the question of whether science fiction writers are better or
worse at envisioning nonfictional futures, this isn't new. The US
Department of Homeland Security did the same thing over a decade ago,
and I wrote
about it
<https://www.schneier.com/essays/archives/2009/06/how_science_fiction.html>
back then:

A couple of years ago, the Department of Homeland Security hired a bunch of
science fiction writers to come in for a day and think of ways terrorists
could attack America. If our inability to prevent 9/11 marked a failure of
imagination, as some said at the time, then who better than science fiction
writers to inject a little imagination into counterterrorism planning?

I discounted the exercise
<http://www.wired.com/dangerroom/2007/05/homeland_securi/> at the time,
calling it "embarrassing." I never thought that 9/11 was a failure of
imagination. I thought, and still think, that 9/11 was primarily a
confluence of three things: the dual failure of centralized coordination
and local control within the FBI, and some lucky breaks on the part of the
attackers. More imagination leads to more movie-plot threats
<https://www.schneier.com/essay-087.html> -- which contributes to overall
fear and overestimation of the risks. And that doesn't help keep us safe at
all.

Science fiction writers are creative, and creativity helps in any future
scenario brainstorming. But please, keep the people who actually know
science and technology in charge.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders
Conference in Washington D.C., science fiction writers
<http://www.washingtonpost.com/wp-dyn/content/article/2009/05/21/AR2009052104379.html>
helped the attendees think differently about security. This seems like a
far better use of their talents than imagining some of the zillions of ways
terrorists can attack America.

** *** ***** ******* *********** *************
Attorney General William Barr on Encryption Policy

*[2019.07.24]*
<https://www.schneier.com/blog/archives/2019/07/attorney_genera_1.html>
Yesterday, Attorney General William Barr gave a major speech on encryption
policy -- what is commonly known as "going dark." Speaking at Fordham
University in New York, he
<https://techcrunch.com/2019/07/23/william-barr-consumers-security-risks-backdoors/>
admitted <https://twitter.com/charlie_savage/status/1153692366966013952>
that adding backdoors decreases security but that it is worth it.

Some hold this view dogmatically, claiming that it is technologically
impossible to provide lawful access without weakening security against
unlawful access. But, in the world of cybersecurity, we do not deal in
absolute guarantees but in relative risks. All systems fall short of
optimality and have some residual risk of vulnerability -- a point which the
tech community acknowledges when they propose that law enforcement can
satisfy its requirements by exploiting vulnerabilities in their products.
The real question is whether the residual risk of vulnerability resulting
from incorporating a lawful access mechanism is materially greater than
those already in the unmodified product. The Department does not believe
this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its
significance should not be judged solely by the extent to which it falls
short of theoretical optimality. Particularly with respect to encryption
marketed to consumers, the significance of the risk should be assessed
based on its practical effect on consumer cybersecurity, as well as its
relation to the net risks that offering the product poses for society.
After all, we are not talking about protecting the Nation's nuclear launch
codes. Nor are we necessarily talking about the customized encryption used
by large business enterprises to protect their operations. We are talking
about consumer products and services such as messaging, smart phones,
e-mail, and voice and data applications. If one already has an effective
level of security -- say, by way of illustration, one that protects against
99 percent of foreseeable threats -- is it reasonable to incur massive further
costs to move slightly closer to optimality and attain a 99.5 percent level
of protection? A company would not make that expenditure; nor should
society. Here, some argue that, to achieve at best a slight incremental
improvement in security, it is worth imposing a massive cost on society in
the form of degraded safety. This is untenable. If the choice is between a
world where we can achieve a 99 percent assurance against cyber threats to
consumers, while still providing law enforcement 80 percent of the access
it might seek; or a world, on the other hand, where we have boosted our
cybersecurity to 99.5 percent but at a cost of reducing law enforcements
[sic] access to zero percent -- the choice for society is clear.

I think this is a major change in government position. Previously, the FBI,
the Justice Department and so on had claimed that backdoors for law
enforcement could be added without any loss of security. They maintained
that technologists just need to figure out how—an approach we have
derisively named "nerd harder
<https://boingboing.net/2018/01/12/imaginary-numbers.html>."

With this change, we can finally have a sensible policy conversation. Yes,
adding a backdoor increases our collective security because it allows law
enforcement to eavesdrop on the bad guys. But adding that backdoor also
decreases our collective security because the bad guys can eavesdrop on
everyone. This is exactly the policy debate we should be having -- not the
fake one about whether or not we can have both security and surveillance.

Barr makes the point that this is about "consumer cybersecurity," and not
"nuclear launch codes." This is true, but ignores the huge amount of
national security-related communications between those two poles. The same
consumer communications and computing devices are used by our lawmakers,
CEOs, legislators, law enforcement officers, nuclear power plant operators,
election officials and so on. There's no longer a difference between
consumer tech and government tech -- it's all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served
for many years as the general counsel of a large telecommunications
concern. During my tenure, we dealt with these issues and lived through the
passage and implementation of CALEA -- the Communications Assistance for Law
Enforcement Act. CALEA imposes a statutory duty on telecommunications
carriers to maintain the capability to provide lawful access to
communications over their facilities. Companies bear the cost of compliance
but have some flexibility in how they achieve it, and the system has by and
large worked. I therefore reserve a heavy dose of skepticism for those who
claim that maintaining a mechanism for lawful access would impose an
unreasonable burden on tech firms -- especially the big ones. It is absurd to
think that we would preserve lawful access by mandating that physical
telecommunications facilities be accessible to law enforcement for the
purpose of obtaining content, while allowing tech providers to block law
enforcement from obtaining that very content.

That telecommunications company was GTE, which became Verizon
<https://biglawbusiness.com/william-barr-would-bring-telecom-chops-to-justice-department>.
Barr conveniently ignores that CALEA-enabled phone switches were used to spy
<https://www.schneier.com/blog/archives/2007/07/story_of_the_gr_1.html> on
<https://spectrum.ieee.org/telecom/security/the-athens-affair> government
officials in Greece in 2003—which seems to have been an NSA operation—and
on a variety of people in Italy
<https://en.wikipedia.org/wiki/SISMI-Telecom_scandal> in 2006. Moreover, in
2012 every CALEA-enabled switch sold to the Defense Department had security
vulnerabilities
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2028152>. (I wrote
about all this, and more, in 2013
<https://www.schneier.com/blog/archives/2013/06/the_problems_wi_3.html>.)

The final thing I noticed about the speech is that it is not about iPhones
and data at rest. It is about communications—data in transit. The "going
dark" debate has bounced back and forth between those two aspects for
decades. It seems to be bouncing once again.

I hope that Barr's latest speech signals that we can finally move on from
the fake security vs. privacy debate, and to the real security vs. security
<https://opensource.com/article/18/6/listening-susan-landau> debate. I know
where I stand on that: As computers continue to permeate every aspect of
our lives, society, and critical infrastructure, it is much more important
to ensure that they are secure from everybody -- even at the cost of
law-enforcement access -- than it is to allow access at the cost of
security. Barr is wrong, it kind of is like these systems are protecting
nuclear launch codes.

This essay previously appeared
<https://www.lawfareblog.com/attorney-general-william-barr-encryption-policy>
on Lawfare.com.

EDITED TO ADD: More
<https://arstechnica.com/tech-policy/2019/07/tech-firms-can-and-must-put-backdoors-in-encryption-ag-barr-says/>
news
<https://www.wsj.com/articles/barr-revives-debate-over-warrant-proof-encryption-11563894048>
articles
<https://www.nytimes.com/2019/07/23/us/politics/william-barr-encryption-security.html>
.

EDITED TO ADD (7/28): Gen. Hayden comments
<https://twitter.com/GenMhayden/status/1153722298861535232>.

EDITED TO ADD (7/30): Good response
<https://blog.erratasec.com/2019/07/why-we-fight-for-crypto.html> by Robert
Graham.

** *** ***** ******* *********** *************
Software Developers and Security

*[2019.07.25]*
<https://www.schneier.com/blog/archives/2019/07/software_develo.html>
According to a survey
<https://www.zdnet.com/article/no-love-lost-between-security-specialists-and-developers/>:
"68% of the security professionals surveyed believe it's a programmer's job
to write secure code, but they also think less than half of developers can
spot security holes." And that's a problem.

Nearly half of security pros surveyed, 49%, said they struggle to get
developers to make remediation of vulnerabilities a priority. Worse still,
68% of security professionals feel fewer than half of developers can spot
security vulnerabilities later in the life cycle. Roughly half of security
professionals said they most often found bugs after code is merged in a
test environment.

At the same time, nearly 70% of developers said that while they are
expected to write secure code
<https://learn.gitlab.com/c/2019-global-develope>, they get little guidance
or help. One disgruntled programmer said, "It's a mess, no standardization,
most of my work has never had a security scan."

Another problem is it seems many companies don't take security seriously
enough. Nearly 44% of those surveyed reported that they're not judged on
their security vulnerabilities.

** *** ***** ******* *********** *************
Insider Logic Bombs

*[2019.07.26]*
<https://www.schneier.com/blog/archives/2019/07/insider_logic_b.html> Add
to the "not very smart criminals" file
<https://www.zdnet.com/article/siemens-contractor-pleads-guilty-to-planting-logic-bomb-in-company-spreadsheets/>
:

According to court documents, Tinley provided software services for
Siemens' Monroeville, PA offices for nearly ten years. Among the work he
was asked to perform was the creation of spreadsheets that the company was
using to manage equipment orders.

The spreadsheets included custom scripts that would update the content of
the file based on current orders stored in other, remote documents,
allowing the company to automate inventory and order management.

But while Tinley's files worked for years, they started malfunctioning
around 2014. According to court documents
<https://www.scribd.com/document/419389583/5948342-0-19339>, Tinley planted
so-called "logic bombs" that would trigger after a certain date, and crash
the files.

Every time the scripts would crash, Siemens would call Tinley, who'd fix
the files for a fee.
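A date-triggered logic bomb needs a hard-coded "after this date, misbehave" comparison somewhere in the script, so one crude audit is to scan embedded spreadsheet macros for date comparisons against literal years. A hypothetical detection sketch (not from the actual case):

```python
import re

# Crude audit heuristic: flag lines in embedded spreadsheet scripts that
# compare the current date/time against a hard-coded year -- the telltale
# shape of a date-triggered logic bomb. Pattern and sample macro are
# illustrative, not taken from the Siemens case.
DATE_TRIGGER = re.compile(
    r"(Now|Date|today)\s*\(?\)?\s*[<>]=?\s*.{0,20}\d{4}", re.IGNORECASE
)

def suspicious_lines(script_text):
    """Return (line_number, text) pairs for lines matching the pattern."""
    return [
        (lineno, line.strip())
        for lineno, line in enumerate(script_text.splitlines(), start=1)
        if DATE_TRIGGER.search(line)
    ]

vba_macro = (
    "If Now() > DateSerial(2014, 5, 1) Then\n"
    "    Application.Quit\n"
    "End If"
)
hits = suspicious_lines(vba_macro)
```

A heuristic like this produces false positives (legitimate date logic abounds), but it is the kind of check that would have surfaced Tinley's trigger during a code review.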

** *** ***** ******* *********** *************
Wanted: Cybersecurity Imagery

*[2019.07.29]*
<https://www.schneier.com/blog/archives/2019/07/wanted_cybersec.html> Eli
Sugarman of the Hewlett Foundation laments
<https://www.lawfareblog.com/sorry-state-cybersecurity-imagery> about the
sorry state of cybersecurity imagery:

The state of cybersecurity imagery is, in a word, abysmal. A simple Google
Image search for the term proves the point: It's all white men in hoodies
hovering menacingly over keyboards, green "Matrix"-style 1s and 0s, glowing
locks and server racks, or some random combination of those
elements—sometimes the hoodie-clad men even wear burglar masks. Each of
these images fails to convey anything about either the importance or the
complexity of the topic—or the huge stakes for governments, industry and
ordinary people alike inherent in topics like encryption, surveillance and
cyber conflict.

I agree that this is a problem. It's not something I noticed until
recently. I work in words. I think in words. I don't use PowerPoint (or
anything similar) when I give presentations. I don't need visuals.

But recently, I started teaching at the Harvard Kennedy School, and I
constantly use visuals in my class. I made those same image searches, and I
came up with similarly unacceptable results.

But unlike me, Hewlett is doing something
<https://hewlett.org/wp-content/uploads/2019/07/Reimagining-Visuals-for-Cybersecurity-Final-Report.pdf>
about it. You can help: participate in the Cybersecurity Visuals Challenge
<https://www.openideo.com/challenge-briefs/cybersecurity-visuals>.

EDITED TO ADD (8/5): News article
<https://www.theverge.com/2019/8/4/20751880/cybersecurity-stock-images-contest-openideo-contest-prize>.
Slashdot thread
<https://idle.slashdot.org/story/19/08/04/1711201/7000-contest-seeks-better-stock-images-for-cybersecurity>
.

** *** ***** ******* *********** *************
ACLU on the GCHQ Backdoor Proposal

*[2019.07.30]*
<https://www.schneier.com/blog/archives/2019/07/aclu_on_the_gch.html> Back
in January, two senior GCHQ officials proposed
<https://www.lawfareblog.com/principles-more-informed-exceptional-access-debate>
a specific backdoor for communications systems. It
<https://www.lawfareblog.com/exceptional-access-devil-details-0> was
<https://blog.cryptographyengineering.com/2018/12/17/on-ghost-users-and-messaging-backdoors/>
universally
<https://www.benthamsgaze.org/2018/12/06/new-threat-models-in-the-face-of-british-intelligence-and-the-five-eyes-new-end-to-end-encryption-interception-strategy/>
derided
<https://www.lawfareblog.com/detecting-ghosts-reverse-engineering-who-ya-gonna-call>
as unworkable -- by me
<https://www.lawfareblog.com/evaluating-gchq-exceptional-access-proposal>,
as well. Now Jon Callas of the ACLU explains why
<https://www.davisvanguard.org/2019/07/the-ghost-user-ploy-to-break-encryption-wont-work/>
.

** *** ***** ******* *********** *************
Another Attack Against Driverless Cars

*[2019.07.31]*
<https://www.schneier.com/blog/archives/2019/07/another_attack_.html> In
this piece of research, researchers successfully attack
<https://arxiv.org/pdf/1906.09765.pdf> a driverless car system -- Renault
Captur's "Level 0" autopilot (Level 0 systems advise human drivers but do
not directly operate cars) -- by following them with drones that project
images of fake road signs in 100ms bursts. The time is too short for human
perception, but long enough to fool the autopilot's sensors.
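Some back-of-the-envelope arithmetic shows why a 100 ms burst sits in that gap; the frame rate used here is an illustrative assumption, not a number from the paper.

```python
# Why a 100 ms projection can fool a camera but not a person: it spans
# several full frames of a typical automotive camera, while fluctuations
# that brief tend to be smoothed over by human visual perception.
# The 30/60 fps figures are illustrative assumptions.

def frames_captured(burst_ms, camera_fps):
    """Whole frames a camera exposes during a burst lasting `burst_ms`."""
    return int(burst_ms / 1000 * camera_fps)

frames = frames_captured(100, 30)  # several frames at 30 fps
```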

Boing Boing post
<https://boingboing.net/2019/07/06/flickering-car-ghosts.html>.

** *** ***** ******* *********** *************
Facebook Plans on Backdooring WhatsApp

*[2019.08.01]*
<https://www.schneier.com/blog/archives/2019/08/facebook_plans_.html> This
article
<https://www.forbes.com/sites/kalevleetaru/2019/07/26/the-encryption-debate-is-over-dead-at-the-hands-of-facebook/#78b5874c5362>
points out that Facebook's planned content moderation scheme will result in
an encryption backdoor into WhatsApp:

In Facebook's vision, the actual end-to-end encryption client itself such
as WhatsApp will include embedded content moderation and blacklist
filtering algorithms. These algorithms will be continually updated from a
central cloud service, but will run locally on the user's device, scanning
each cleartext message before it is sent and each encrypted message after
it is decrypted.

The company even noted
<https://www.forbes.com/sites/kalevleetaru/2019/05/28/facebook-is-already-working-towards-germanys-end-to-end-encryption-backdoor-vision/>
that when it detects violations it will need to quietly stream a copy of
the formerly encrypted content back to its central servers to analyze
further, even if the user objects, acting as a true wiretapping service.

Facebook's model entirely bypasses the encryption debate by globalizing the
current practice of compromising devices by building those encryption
bypasses directly into the communications clients themselves and deploying
what amounts to machine-based wiretaps to billions of users at once.
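The scheme the essay describes reduces to a few moving parts: a locally run filter, updated from a central service, that inspects plaintext before encryption and reports matches upstream. A hypothetical sketch; nothing here is WhatsApp's actual code or API:

```python
# Hypothetical sketch of client-side scanning as the essay describes it.
# The blocklist, encryption stand-in, and reporting hook are all invented.

BLOCKLIST = {"forbidden phrase"}  # would be pushed from the cloud service

def send_message(plaintext, encrypt, report_upstream):
    """Scan plaintext before encryption; report matches upstream."""
    if any(term in plaintext.lower() for term in BLOCKLIST):
        report_upstream(plaintext)  # the "machine-based wiretap" step
    return encrypt(plaintext)       # end-to-end encryption still happens

reported = []
ciphertext = send_message(
    "hello world",
    encrypt=lambda m: m[::-1],      # stand-in for real encryption
    report_upstream=reported.append,
)
```

The point of the sketch is that the encryption itself is untouched; the compromise happens entirely before the ciphertext exists, which is why it bypasses the encryption debate.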

Once this is in place, it's easy for the government to demand that Facebook
add another filter -- one that searches for communications that they care
about -- and alert them when it gets triggered.

Of course alternatives like Signal will exist for those who don't want to
be subject to Facebook's content moderation, but what happens when this
filtering technology is built into operating systems?

The problem is that if Facebook's model succeeds, it will only be a matter
of time before device manufacturers and mobile operating system developers
embed similar tools directly into devices themselves, making them
impossible to escape. Embedding content scanning tools directly into phones
would make it possible to scan all apps, including ones like Signal,
effectively ending the era of encrypted communications.

I don't think this will happen -- why does AT&T care about content
moderation? -- but it is something to watch.

EDITED TO ADD (8/2): This story is wrong. Read my correction
<https://www.schneier.com/blog/archives/2019/08/more_on_backdoo.html>.

** *** ***** ******* *********** *************
How Privacy Laws Hurt Defendants

*[2019.08.02]*
<https://www.schneier.com/blog/archives/2019/08/how_privacy_law.html>
Rebecca Wexler has an interesting op-ed
<https://www.latimes.com/opinion/story/2019-07-30/consumer-data-privacy-laws-crime-defendants-police-instagram>
about an inadvertent harm that privacy laws can cause: while law
enforcement can often access third-party data to aid in prosecution, the
accused don't have the same level of access to aid in their defense:

The proposed privacy laws would make this situation worse. Lawmakers may
not have set out to make the criminal process even more unfair, but the
unjust result is not surprising. When lawmakers propose privacy bills to
protect sensitive information, law enforcement agencies lobby for exceptions
<https://repository.law.umich.edu/mlr/vol111/iss4/1/> so they can continue
to access the information. Few lobby for the accused to have similar
<https://scholarship.law.cornell.edu/clr/vol99/iss5/1/> rights. Just as the
privacy interests of poor, minority and heavily policed communities are often
ignored <https://www.sup.org/books/title/?id=25115> in the lawmaking
process, so too are the interests of criminal defendants, many from those
same communities.

In criminal cases, both the prosecution and the accused have a right to
subpoena evidence so that juries can hear both sides of the case. The new
privacy bills need to ensure that law enforcement and defense investigators
operate under the same rules when they subpoena digital data. If lawmakers
believe otherwise, they should have to explain and justify that view.

For more detail, see her paper
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3428607>.

** *** ***** ******* *********** *************
Disabling Security Cameras with Lasers

*[2019.08.02]*
<https://www.schneier.com/blog/archives/2019/08/disabling_secur.html>
There's a really interesting video
<https://www.zerohedge.com/news/2019-08-01/watch-hong-kong-protesters-using-lasers-disrupt-facial-recognition-cameras>
of <https://boingboing.net/2019/07/31/just-dont-have-a-face-2.html>
protesters in Hong Kong using some sort of laser to disable security
cameras. I know nothing more about the technologies involved.

EDITED TO ADD (8/14): LIDAR from self-driving cars has damaged
<https://www.theinquirer.net/inquirer/news/3069427/sony-a7r-ii-camera-destroyed-by-self-driving-car-at-ces-2019>
security cameras before.

** *** ***** ******* *********** *************
More on Backdooring (or Not) WhatsApp

*[2019.08.02]*
<https://www.schneier.com/blog/archives/2019/08/more_on_backdoo.html>
Yesterday, I blogged about
<https://www.schneier.com/blog/archives/2019/08/facebook_plans_.html> a
Facebook plan to backdoor WhatsApp by adding client-side scanning and
filtering. It seems that I was wrong, and there are no such plans.

The only source for that post was a Forbes essay
<https://www.forbes.com/sites/kalevleetaru/2019/07/26/the-encryption-debate-is-over-dead-at-the-hands-of-facebook/#618e906a5362>
by Kalev Leetaru, which links to a previous Forbes essay
<https://www.forbes.com/sites/kalevleetaru/2019/05/28/facebook-is-already-working-towards-germanys-end-to-end-encryption-backdoor-vision/#5830e9cd4e4a>
by him, which links to a video presentation
<https://developers.facebook.com/videos/2019/applying-ai-to-keep-the-platform-safe/>
from a Facebook developers conference.

Leetaru extrapolated a lot out of very little. I watched the video (the
relevant section is at the 23:00 mark), and it doesn't talk about
client-side scanning of messages. It doesn't talk about messaging apps at
all. It discusses using AI techniques to find bad content on Facebook, and
the difficulties that arise from dynamic content:

So far, we have been keeping this fight [against bad actors and harmful
content] on familiar grounds. And that is, we have been training our AI
models on the server and making inferences on the server when all the data
are flooding into our data centers.

While this works for most scenarios, it is not the ideal setup for some
unique integrity challenges. URL masking is one such problem which is very
hard to do. We have the traditional way of server-side inference. What is
URL masking? Let us imagine that a user sees a link on the app and decides
to click on it. When they click on it, Facebook actually logs the URL to
crawl it at a later date. But...the publisher can dynamically change the
content of the webpage to make it look more legitimate [to Facebook]. But
when our users click on the same link, they see something completely
different -- oftentimes it is disturbing; oftentimes it violates our policy
standards. Of course, this creates a bad experience for our community that
we would like to avoid. This and similar integrity problems are best solved
with AI on the device.
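The "URL masking" trick he describes is simple cloaking, and a hypothetical simulation (my own, not Facebook's code) shows why server-side crawling can't catch it: the publisher serves the crawler something benign and real users something else.

```python
# Toy publisher that discriminates by User-Agent. "facebookexternalhit"
# is Facebook's link-crawler UA string; the page contents are invented.
def publisher_response(user_agent: str) -> str:
    if "facebookexternalhit" in user_agent:
        return "<html>Harmless recipe blog</html>"      # shown to the crawler
    return "<html>Policy-violating content</html>"      # shown to real users

crawler_view = publisher_response("facebookexternalhit/1.1")
user_view = publisher_response("Mozilla/5.0 (iPhone)")
# Server-side inference only ever sees crawler_view, so the abuse is
# invisible unless the check runs on the user's device, where user_view
# is what actually renders.
```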

That might be true, but it also would hand whatever secret-AI sauce
Facebook has to every one of its users to reverse engineer -- which means
it's probably not going to happen. And it is a dumb idea, for reasons Steve
Bellovin has pointed out
<https://www.cs.columbia.edu/~smb/blog/2019-08/2019-08-01.html>.

Facebook's first published response was a comment
<https://news.ycombinator.com/item?id=20587643> on the Hacker News website
from a user named "wcathcart," which Cardozo assures me is Will Cathcart,
the vice president of WhatsApp. (I have no reason to doubt his identity,
but surely there is a more official news channel that Facebook could have
chosen to use if they wanted to.) Cathcart wrote:

We haven't added a backdoor to WhatsApp. The Forbes contributor referred to
a technical talk about client side AI in general to conclude that we might
do client side scanning of content on WhatsApp for anti-abuse purposes.

To be crystal clear, we have not done this, have zero plans to do so, and
if we ever did it would be quite obvious and detectable that we had done
it. We understand the serious concerns this type of approach would raise
which is why we are opposed to it.

Facebook's second published response was a comment
<https://www.schneier.com/blog/archives/2019/08/facebook_plans_.html#c6796703>
on my original blog post, which has been confirmed to me by the WhatsApp
people as authentic. It's more of the same.

So, this was a false alarm. And, to be fair, Alec Muffet called foul
<https://medium.com/@alecmuffett/on-the-latest-facebook-messenger-conspiracy-theory-b214eab3ea58>
on the first Forbes piece:

So, here's my pre-emptive finger wag: Civil Society's pack mentality can
make us our own worst enemies. If we go around repeating one man's Germanic
conspiracy theory, we may doom ourselves to precisely what we fear.
Instead, we should—we must—take steps to constructively demand what we
actually want: End to End Encryption which is worthy of the name.

Blame accepted. But in general, this is the sort of thing we need to watch
for. End-to-end encryption only secures data in transit. The data has to be
in the clear on the device where it is created, and it has to be in the
clear on the device where it is consumed. Those are the obvious places for
an eavesdropper to get a copy.

This has been a long process. Facebook desperately wanted to convince me to
correct the record, while at the same time not wanting to write something
on their own letterhead (just a couple of comments, so far). I spoke at
length with Privacy Policy Manager Nate Cardozo, whom Facebook hired last
December
<https://www.schneier.com/blog/archives/2019/02/facebooks_new_p.html> from
EFF. (Back then, I remember thinking of him -- and the two other new
privacy hires -- as basically human warrant canaries. If they ever leave
Facebook under non-obvious circumstances, we know that things are bad.) He
basically leveraged his historical reputation to assure me that WhatsApp,
and Facebook in general, would never do something like this. I am trusting
him, while also reminding everyone that Facebook has broken so many privacy
promises <https://phys.org/news/2018-03-facebook-history-privacy.html> that
they really can't be trusted.

Final note: If they want to be trusted, Adam Shostack and I gave them a road
map
<https://onezero.medium.com/a-new-privacy-constitution-for-facebook-a7106998f904>
.

Hacker News thread <https://news.ycombinator.com/item?id=20585794>.

EDITED TO ADD (8/4): Slashdot covered
<https://tech.slashdot.org/story/19/08/03/0031206/did-whatsapp-backdoor-rumor-come-from-unanswered-questions--and-leap-of-faith-for-closed-source-encryption-products>
my retraction.

** *** ***** ******* *********** *************
Regulating International Trade in Commercial Spyware

*[2019.08.05]*
<https://www.schneier.com/blog/archives/2019/08/regulating_inte.html> Siena
Anstis, Ronald J. Deibert, and John Scott-Railton of Citizen Lab published
<https://www.lawfareblog.com/proposed-response-commercial-surveillance-emergency>
an editorial calling for regulating the international trade in commercial
surveillance systems until we can figure out how to curb human rights
abuses.

Any regime of rigorous human rights safeguards that would make a meaningful
change to this marketplace would require many elements, for instance,
compliance with the U.N. Guiding Principles on Business and Human Rights
<https://www.ohchr.org/documents/publications/GuidingprinciplesBusinesshr_eN.pdf>.
Corporate tokenism in this space is unacceptable; companies will have to
affirmatively choose human rights concerns over growing profits and hiding
behind the veneer of national security. Considering the lies
<https://www.theguardian.com/technology/2015/jul/06/hacking-team-hacked-firm-sold-spying-tools-to-repressive-regimes-documents-claim>
that have emerged from within the surveillance industry, self-reported
compliance is insufficient; compliance will have to be independently
audited and verified and accept robust measures of outside scrutiny.

The purchase of surveillance technology by law enforcement in any state
must be transparent and subject to public debate. Further, its use must
comply with frameworks setting out the lawful scope of interference with
fundamental rights under international human rights law and applicable
national laws, such as the "Necessary and Proportionate
<https://necessaryandproportionate.org/principles>" principles on the
application of human rights to surveillance. Spyware companies like NSO
Group have relied on rubber stamp approvals by government agencies whose
permission is required to export their technologies abroad. To prevent
abuse, export control systems must instead prioritize a reform agenda that
focuses on minimizing the negative human rights impacts of surveillance
technology and that ensures -- with clear and immediate consequences for
those who fail -- that companies operate in an accountable and transparent
environment.

Finally, and critically, states must fulfill their duty to protect
individuals against third-party interference with their fundamental rights.
With the growth of digital authoritarianism and the alarming consequences
that it may hold for the protection of civil liberties around the world,
rights-respecting countries need to establish legal regimes that hold
companies and states accountable for the deployment of surveillance
technology within their borders. Law enforcement and other organizations
that seek to protect refugees or other vulnerable persons coming from
abroad will also need to take digital threats seriously.

** *** ***** ******* *********** *************
Phone Pharming for Ad Fraud

*[2019.08.06]*
<https://www.schneier.com/blog/archives/2019/08/phone_farming_f.html>
Interesting article
<https://www.vice.com/en_us/article/d3naek/how-to-make-a-phone-farm> on
people using banks of smartphones to commit ad fraud for profit.

No one knows how prevalent ad fraud is on the Internet. I believe it is
surprisingly high -- here's an article
<https://www.emarketer.com/content/five-charts-the-state-of-ad-fraud> that
places losses between $6.5 and $19 billion annually -- and something
companies like Google and Facebook would prefer remain unresearched.

** *** ***** ******* *********** *************
Brazilian Cell Phone Hack

*[2019.08.07]*
<https://www.schneier.com/blog/archives/2019/08/brazilian_cell_.html> I
know there's a lot of politics associated with this story, but concentrate
on the cybersecurity aspect for a moment. The cell phones of a thousand
Brazilians, including senior government officials, were hacked
<https://www.bloomberg.com/news/articles/2019-07-24/brazil-hackers-target-1-000-phones-including-economy-minister-s>
-- seemingly by actors much less sophisticated than rival governments.

Brazil's federal police arrested four people for allegedly hacking 1,000
cellphones belonging to various government officials, including that of
President Jair Bolsonaro.

Police detective João Vianey Xavier Filho said the group hacked into the
messaging apps of around 1,000 different cellphone numbers, but provided
little additional information at a news conference in Brasilia on
Wednesday. Cellphones used by Bolsonaro were among those attacked by the
group, the justice ministry said in a statement on Thursday, adding that
the president was informed of the security breach.

[...]

In the court order
<https://politica.estadao.com.br/blogs/fausto-macedo/wp-content/uploads/sites/41/2019/07/DECIS%C3%83Ospoofing.pdf>
determining the arrest of the four suspects, Judge Vallisney de Souza
Oliveira wrote that the hackers had accessed Moro's Telegram messaging app,
along with those of two judges and two federal police officers.

When I say that smartphone security equals national security, this is the
kind of thing I am talking about.

** *** ***** ******* *********** *************
AT&T Employees Took Bribes to Unlock Smartphones

*[2019.08.08]*
<https://www.schneier.com/blog/archives/2019/08/att_employees_t.html>
This wasn't
a small operation
<https://arstechnica.com/tech-policy/2019/08/att-employees-took-bribes-to-unlock-phones-and-plant-malware-doj-says/>
:

A Pakistani man bribed AT&T call-center employees to install malware and
unauthorized hardware as part of a scheme to fraudulently unlock cell
phones, according to the US Department of Justice. Muhammad Fahd, 34, was
extradited from Hong Kong to the US on Friday and is being detained pending
trial.

An indictment alleges that "Fahd recruited and paid AT&T insiders to use
their computer credentials and access to disable AT&T's proprietary locking
software that prevented ineligible phones from being removed from AT&T's
network," a DOJ announcement
<https://www.justice.gov/usao-wdwa/pr/leader-conspiracy-illegally-unlock-cell-phones-profit-extradited-hong-kong>
yesterday said. "The scheme resulted in millions of phones being removed
from AT&T service and/or payment plans, costing the company millions of
dollars. Fahd allegedly paid the insiders hundreds of thousands of
dollars—paying one co-conspirator $428,500 over the five-year scheme."

In all, AT&T insiders received more than $1 million in bribes from Fahd and
his co-conspirators, who fraudulently unlocked more than 2 million cell
phones, the government alleged. Three former AT&T customer service reps
from a call center in Bothell, Washington, already pleaded guilty and
agreed to pay the money back to AT&T.

** *** ***** ******* *********** *************
Supply-Chain Attack against the Electron Development Platform

*[2019.08.08]*
<https://www.schneier.com/blog/archives/2019/08/supply-chain_at.html>
Electron is a cross-platform development system for many popular
communications apps, including Skype, Slack, and WhatsApp. Security
vulnerabilities
<https://www.contextis.com/en/blog/basic-electron-framework-exploitation>
in the update system allow someone to silently inject malicious code into
applications. From a news article
<https://arstechnica.com/information-technology/2019/08/skype-slack-other-electron-based-apps-can-be-easily-backdoored/>
:

At the BSides LV security conference on Tuesday, Pavel Tsakalidis
demonstrated a tool he created called BEEMKA
<https://www.contextis.com/en/blog/basic-electron-framework-exploitation>,
a Python-based tool that allows someone to unpack Electron ASAR archive
files <https://github.com/electron/asar> and inject new code into
Electron's JavaScript libraries and built-in Chrome browser extensions. The
vulnerability is not part of the applications themselves but of the
underlying Electron framework—and that vulnerability allows malicious
activities to be hidden within processes that appear to be benign.
Tsakalidis said that he had contacted Electron about the vulnerability but
that he had gotten no response—and the vulnerability remains.

While making these changes requires administrator access on Linux and
MacOS, it only requires local access on Windows. Those modifications can
create new event-based "features" that can access the file system, activate
a Web cam, and exfiltrate information from systems using the functionality
of trusted applications—including user credentials and sensitive data. In
his demonstration, Tsakalidis showed a backdoored version of Microsoft
Visual Studio Code that sent the contents of every code tab opened to a
remote website.

Basically, the Electron ASAR files aren't signed or encrypted, so modifying
them is easy.
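The underlying problem generalizes to any unsigned archive. A toy demonstration (using a zip as a stand-in for ASAR, which is likewise just a packed file index with no signature): an attacker can rewrite a packaged script and the app loads it without complaint, whereas a detached MAC or signature check would catch the swap before any code runs.

```python
import hashlib
import hmac
import io
import zipfile

def pack(script: bytes) -> bytes:
    """Build a minimal app archive containing one script."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("main.js", script)
    return buf.getvalue()

def unpack(archive: bytes) -> bytes:
    """Load the script back out -- note: no integrity check at all."""
    with zipfile.ZipFile(io.BytesIO(archive)) as z:
        return z.read("main.js")

KEY = b"vendor-signing-key"   # hypothetical key the vendor would hold
original = pack(b"console.log('hi')")
tag = hmac.new(KEY, original, hashlib.sha256).digest()  # shipped alongside

tampered = pack(b"exfiltrate(openTabs())")  # attacker's backdoored build
assert unpack(tampered) != unpack(original)  # loads just as happily
# With an integrity check, the tampering is detected before loading:
assert not hmac.compare_digest(
    hmac.new(KEY, tampered, hashlib.sha256).digest(), tag)
```

Signing the archive (and verifying before load) is exactly the fix the researcher is asking Electron for.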

Note that this attack requires local access to the computer, which means
that an attacker that could do this could do much more damaging things as
well. But once an app has been modified, it can be distributed to other
users. It's not a big deal attack, but it's a vulnerability that should be
closed.

** *** ***** ******* *********** *************
Evaluating the NSA's Telephony Metadata Program

*[2019.08.12]*
<https://www.schneier.com/blog/archives/2019/08/evaluating_the_1.html>
Interesting analysis: "Examining the Anomalies, Explaining the Value:
Should the USA FREEDOM Act's Metadata Program be Extended?
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3434358>" by Susan
Landau and Asaf Lubin.

*Abstract:* The telephony metadata program, which was authorized under
Section 215 of the PATRIOT Act, remains one of the most controversial
programs launched by the U.S. Intelligence Community (IC) in the wake of
the 9/11 attacks. Under the program major U.S. carriers were ordered to
provide NSA with daily Call Detail Records (CDRs) for all communications
to, from, or within the United States. The Snowden disclosures and the
public controversy that followed led Congress in 2015 to end bulk
collection and amend the CDR authorities with the adoption of the USA
FREEDOM Act (UFA).

For a time, the new program seemed to be functioning well. Nonetheless,
three issues emerged around the program. The first concern was over high
numbers: in both 2016 and 2017, the Foreign Intelligence Surveillance Court
issued 40 orders for collection, but the NSA collected hundreds of millions
of CDRs, and the agency provided little clarification for the high numbers.
The second emerged in June 2018 when the NSA announced the purging of three
years' worth of CDR records for "technical irregularities." Finally, in
March 2019 it was reported that the NSA had decided to completely abandon
the program and not seek its renewal as it is due to sunset in late 2019.

This paper sheds significant light on all three of these concerns. First,
we carefully analyze the numbers, showing how forty orders might lead to
the collection of several million CDRs, thus offering a model to assist in
understanding Intelligence Community transparency reporting across its
surveillance programs. Second, we show how the architecture of modern
telephone communications might cause collection errors that fit the
reported reasons for the 2018 purge. Finally, we show how changes in the
terrorist threat environment as well as in the technology and communication
methods they employ—in particular the deployment of asynchronous encrypted
IP-based communications—have made the telephony metadata program far less
beneficial over time. We further provide policy recommendations for
Congress to increase effective intelligence oversight.
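The arithmetic behind "forty orders, hundreds of millions of records" is worth seeing. A back-of-the-envelope model (illustrative parameters of my own choosing, not the paper's) of two-hop contact chaining with daily production:

```python
orders = 40                # FISC orders, one target each
contacts_per_phone = 50    # distinct numbers a phone talks to (assumed)
calls_per_day = 5          # CDRs generated per monitored number per day (assumed)
days = 365                 # a year of daily production

first_hop = orders * contacts_per_phone        # everyone the targets called
second_hop = first_hop * contacts_per_phone    # everyone *they* called
monitored = orders + first_hop + second_hop
cdrs = monitored * calls_per_day * days
print(f"{monitored:,} numbers -> {cdrs:,} CDRs")   # ~186 million CDRs
```

Modest per-phone numbers compound across two hops and 365 days of collection, which is how a few dozen orders can plausibly produce record counts in the hundreds of millions.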

** *** ***** ******* *********** *************
Exploiting GDPR to Get Private Information

*[2019.08.13]*
<https://www.schneier.com/blog/archives/2019/08/exploiting_gdpr.html> A
researcher abused the GDPR <https://www.bbc.com/news/technology-49252501>
to get information on his fiancee:

It is one of the first tests of its kind to exploit the EU's General Data
Protection Regulation (GDPR)
<https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=EN>,
which came into force in May 2018. The law shortened the time organisations
had to respond to data requests, added new types of information they have
to provide, and increased the potential penalty for non-compliance.

"Generally if it was an extremely large company -- especially tech ones --
they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe
didn't have much of a specialised process [to handle requests], failed."

He declined to identify the organisations that had mishandled the requests,
but said they had included:

   - a UK hotel chain that shared a complete record of his partner's
   overnight stays
   - two UK rail companies that provided records of all the journeys she
   had taken with them over several years
   - a US-based educational company that handed over her high school
   grades, mother's maiden name and the results of a criminal background check
   survey.

** *** ***** ******* *********** *************
Side-Channel Attack against Electronic Locks

*[2019.08.14]*
<https://www.schneier.com/blog/archives/2019/08/side-channel_at_2.html>
Several high-security electronic locks are vulnerable to side-channel
attacks
<https://www.reuters.com/article/us-locks-cyber-exclusive/exclusive-high-security-locks-for-government-and-banks-hacked-by-researcher-idUSKCN1UW26Z>
involving power monitoring.
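For readers unfamiliar with the attack class, a toy version (my construction, not the locks' actual flaw): a lock that compares a PIN digit by digit and bails out at the first mismatch leaks, through how much work it does (a stand-in for its power trace), how many leading digits were correct -- letting an attacker recover the PIN one digit at a time instead of brute-forcing all of it.

```python
SECRET = "4931"  # hypothetical PIN inside the lock

def check_pin(guess: str) -> tuple[bool, int]:
    """Return (accepted, units_of_work); work models the power trace."""
    work = 0
    for s, g in zip(SECRET, guess):
        work += 1                 # each comparison draws measurable power
        if s != g:
            return False, work    # early exit leaks the mismatch position
    return len(guess) == len(SECRET), work

def recover_pin() -> str:
    """Recover the PIN digit by digit by watching the work counter."""
    pin = ""
    for position in range(len(SECRET)):
        for digit in "0123456789":
            candidate = (pin + digit).ljust(len(SECRET), "0")
            _, work = check_pin(candidate)
            if work > position + 1:   # got past this position's comparison
                pin += digit
                break
        else:
            # The final digit never yields extra work; find it by acceptance.
            pin += next(d for d in "0123456789" if check_pin(pin + d)[0])
    return pin
```

This turns 10^4 guesses into at most 40: the defense is a constant-time comparison that does identical work regardless of where the mismatch occurs.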

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing
summaries, analyses, insights, and commentaries on security technology. To
subscribe, or to read back issues, see Crypto-Gram's web page
<https://www.schneier.com/crypto-gram.html>.

You can also read these articles on my blog, Schneier on Security
<https://www.schneier.com>.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues
and friends who will find it valuable. Permission is also granted to
reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called
a security guru by the Economist. He is the author of over one dozen books
-- including his latest, Click Here to Kill Everybody
<https://www.schneier.com/books/click_here/> -- as well as hundreds of
articles, essays, and academic papers. His newsletter and blog are read by
over 250,000 people. Schneier is a fellow at the Berkman Klein Center for
Internet and Society at Harvard University; a Lecturer in Public Policy at
the Harvard Kennedy School; a board member of the Electronic Frontier
Foundation, AccessNow, and the Tor Project; and an advisory board member of
EPIC and VerifiedVoting.org.

Copyright © 2019 by Bruce Schneier.

** *** ***** ******* *********** *************

Mailing list hosting graciously provided by MailChimp
<https://mailchimp.com/>. Sent without web bugs or link tracking.

This e-mail was sent to: nelson at web4all.co.ke
*You are receiving this e-mail because you subscribed to the Crypto-Gram
newsletter.*

unsubscribe from this list
<https://schneier.us18.list-manage.com/unsubscribe?u=f99e2b5ca82502f48675978be&id=22184111ab&e=28d0f7e71f&c=bb0ba545a7>
    update subscription preferences
<https://schneier.us18.list-manage.com/profile?u=f99e2b5ca82502f48675978be&id=22184111ab&e=28d0f7e71f>
Bruce Schneier · Harvard Kennedy School · 1 Brattle Square · Cambridge, MA
02138 · USA