Monday, October 31, 2016

The Cyber Insurance Emperor Has No Clothes


(Of course, the title is hyperbole and attention-seeking. Now that you are here, I hope you'll keep reading.)

In the Hans Christian Andersen story, The Emperor's New Clothes, the collective delusion about the Emperor's grand clothes was punctured by a young child who cried out: "But he has got nothing on!"

I don't mean that cyber insurance has no value or that it is a charade.

My main point: cyber insurance has the wrong clothes for the purposes and social value to which it aspires.

This blog post sketches the argument and evidence. I will be following up separately with more detailed and rigorous analysis (via computational modeling) that, I hope, will be publishable.

tl;dr: (switching metaphors)
As a driving force for better cyber risk management, today's cyber insurance is about as effective as eating soup with a fork.
(This is a long post. For readers who want to "cut to the chase", you can skip to the "Cyber Insurance is a Functional Misfit" section.)

Wednesday, October 19, 2016

Orange TRUMPeter Swans: When What You Know Ain't So

Was Donald J. Trump's political rise in 2015-2016 a "black swan" event?  "Yes" is the answer asserted by Jack Shafer in this Politico article. "No" is the answer from other writers, including David Atkins in this article on the Washington Monthly Political Animal Blog.

Orange Swan
My answer is "Yes", but not in the same way that other events are Black Swans. Orange Swans like the Trump phenomenon fit this aphorism:
"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." -- attributed to Mark Twain
In other words, the signature characteristic of Orange Swans is delusion.

Rethinking "Black Swans"

As I mentioned at the start of this series, the "Black Swan event" metaphor is a conceptual mess. (This post is the sixth in the series "Think You Understand Black Swans? Think Again".)

It doesn't make sense to label any set of events as "Black Swans". It is not the events themselves that make them unexpected and surprising; it is the processes involved: the generating mechanisms, our evidence about them, and our methods of reasoning.

Tuesday, June 21, 2016

Public Statement to the Commission on Enhancing National Cybersecurity, 6-21-2016

[Submitted in writing at this meeting. An informal 5 min. version was presented during the public comment period. This statement is my own and does not represent the views or interests of my employer.]

Summary

Cyber security desperately needs institutional innovation, especially involving incentives and metrics.  Nearly every report since 2003 has included recommendations to do more R&D on incentives and metrics, but progress has been slow and inadequate.

Why?

Because we have the wrong model for research and development (R&D) on institutions.

My primary recommendation is that the Commission’s report should promote new R&D models for institutional innovation.  We can learn from examples in other fields, including sustainability, public health, financial services, and energy.

What are Institutions and Institutional Innovation?

Institutions are norms, rules, and social structures that enable society to function. Examples include marriage, consumer credit reporting and scoring, and emissions credit markets.

Cyber security[1] has institutions today, but many are inadequate, dysfunctional, or missing.  Examples:
  1. overlapping “checklists + audits”; 
  2. professional certifications; 
  3. post-breach protection for consumers (e.g. credit monitoring); 
  4. lists of “best practices” that have never been tested or validated as “best” and therefore are no better than folklore.  

There is plenty of talk about “standards”,  “information sharing”, “public-private partnerships”, and “trusted third parties”, but these remain mostly talking points and not realities.

Institutional innovation is a set of processes that either change existing institutions in fundamental ways or create new institutions.   Sometimes this happens with concerted effort by “institutional entrepreneurs”, and other times it happens through indirect and emergent mechanisms, including chance and “happy accidents”.

Institutional innovation takes a long time – typically ten to fifty years.

Institutional innovation works differently from technological innovation, which we do well.  In contrast, we understand institutional innovation poorly, especially how to accelerate it or direct it toward specific goals.

Finally, institutions and institutional innovation should not be confused with “policy”.  Changes to government policy may be an element of institutional innovation, but they do not encompass the main elements – people, processes, technology, organizations, and culture.

The Need: New Models of Innovation

Through my studies, I have come to believe that institutional innovation is much more complicated[2] than technological innovation.  It is almost never a linear process from theory to practice with clearly defined stages.

There is no single best model for institutional innovation.  There needs to be creativity in “who leads”, “who follows”, and “when”.  The normal roles of government, academics, industry, and civil society organizations may be reversed or otherwise radically redrawn.

Techniques are different, too. Institutional innovation can be orchestrated as a "messy" design process [3].  Fruitful institutional innovation in cyber security might involve some of these:
  • “Skunk Works”
  • Rapid prototyping and pilot tests
  • Proof of Concept demonstrations
  • Bricolage[4]  and exaptation[5]
  • Simulations or table-top exercises
  • Multi-stakeholder engagement processes
  • Competitions and contests
  • Crowd-sourced innovation (e.g. “hackathons” and open source software development)

What all of these have in common is that they produce something that can be tested and can support learning.  They are more than talking and consensus meetings.

There are several academic fields that can contribute to defining and analyzing new innovation models, including Institutional Sociology, Institutional Economics, Sociology of Innovation, Design Thinking, and the Science of Science Policy.

Role Models

To identify and test alternative innovation models, we can learn from institutional innovation successes and failures in other fields, including:
  • Common resource management (sustainability)
  • Epidemiology data collection and analysis (public health)
  • Crash and disaster investigation and reporting (safety)
  • Micro-lending and peer-to-peer lending (financial services)
  • Emissions credit markets and carbon offsets (energy)
  • Open software development (technology)
  • Disaster recovery and response[6]  (homeland security)

In fact, there would be great benefit in a joint R&D initiative for institutional innovation that could apply to these other fields as well as cyber security.  Furthermore, there would be benefit in making this an international effort, not one limited to the United States.

Endnotes

[1] "Cyber security" includes information security, digital privacy, digital identity, digital information property, digital civil rights, and digital homeland & national defense.
[2] For case studies and theory, see: Padgett, J. F., & Powell, W. W. (2012). The Emergence of Organizations and Markets. Princeton, NJ: Princeton University Press.
[3] Ostrom, E. (2009). Understanding Institutional Diversity. Princeton, NJ: Princeton University Press.
[4] “something constructed or created from a diverse range of available things.”
[5]  “a trait that has been co-opted for a use other than the one for which natural selection has built it.”
[6] See: Auerswald, P. E., Branscomb, L. M., Porte, T. M. L., & Michel-Kerjan, E. O. (2006). Seeds of Disaster, Roots of Response: How Private Action Can Reduce Public Vulnerability. Cambridge University Press.





Wednesday, March 30, 2016

#Tay Twist: @Tayandyou Twitter Account Was Hijacked ...By Bungling Microsoft Test Engineers (Mar. 30)

[Update 5:35am  From CNBC http://www.cnbc.com/2016/03/30/tay-microsofts-ai-program-is-back-online.html:
Microsoft's artificial intelligence (AI) program, Tay, reappeared on Twitter on Wednesday after being deactivated last week for posting offensive messages. However, the program once again went wrong and Tay's account was set to private after it began repeating the same message over and over to other Twitter users. According to Microsoft, the account was reactivated by accident during testing.
"Tay remains offline while we make adjustments," a spokesperson for the company told CNBC via email. "As part of testing, she was inadvertently activated on Twitter for a brief period of time." (emphasis added)
I'm puzzled by this explanation but I'll go back through the evidence to see which explanation is best supported.]

[Update 6:35am  It now looks like the "account hack" was really a bungled test session by someone at Microsoft Research -- effectively a "self-hack".

Important: This episode was not "Tay being Tay".]

The @Tayandyou Twitter chatbot had been silent since last Thursday, when Microsoft shut it down. Shortly after midnight today, Pacific time, the @Tayandyou Twitter account woke up and started blasting tweets at very high volume.  All of these tweets included other Twitter handles, perhaps drawn from previous tweets, perhaps from followers.

But it immediately became apparent that something was different and wrong.  These tweets didn't look anything like the earlier ones in style, structure, or sentience.  From the tweet conversations and from the sequence of events, I believe that the @Tayandyou account was hacked today (March 30) and was active for 15 minutes, sending over 4,200 tweets.
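
For scale, here is a rough back-of-the-envelope estimate of the tweet rate implied by those figures. This is my own calculation, using only the approximate numbers reported above:

  # Rough tweet-rate estimate for the March 30 burst,
  # based on the approximate figures reported above.
  tweets = 4200      # observed tweets during the burst (approximate)
  minutes = 15       # duration of the burst (approximate)

  per_minute = tweets / minutes    # about 280 tweets per minute
  per_second = per_minute / 60     # about 4.7 tweets per second

  print(f"~{per_minute:.0f} tweets/min, ~{per_second:.1f} tweets/sec")

A sustained rate of several tweets per second is far beyond any conversational pace, which is consistent with an automated blast rather than ordinary chat replies.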

[Update 4:30am
The online media has started posting articles, but they all treat this as more "Tay runs amok".  Only The Verge has updated their story.  If you read an article that doesn't at least consider that Tay's Twitter account was hacked, could you please add a comment with a link to this post?  Thanks.]

Tuesday, March 29, 2016

Media Coverage of #TayFail Was "All Foam, No Beer"

One of the most surprising things I've discovered in the course of investigating and reporting on Microsoft's Tay chatbot is how the rest of the media (traditional and online) have covered it, and how digital media works in general.

None of the articles in major media included any investigation or research.  None.  Let that sink in.

All foam, no beer.

Sunday, March 27, 2016

Microsoft's Tay Has No AI

(This is the third of three posts about Tay. Previous posts: "Poor Software QA..." and "...Smoking Gun...")

While nearly all the press about Microsoft's Twitter chatbot Tay (@Tayandyou) is about artificial intelligence (AI) and how AI can be poisoned by trolling users, there is a more disturbing possibility:

  • There is no AI (worthy of the name) in Tay. (probably)

I say "probably" because the evidence is strong but not conclusive and the Microsoft Research team has not publicly revealed their architecture or methods.  But I'm willing to bet on it.

The evidence comes from three places. The first is observation of a small, non-random sample of Tay tweet and direct message sessions (posted by various users). The second is circumstantial: the composition of the team behind Tay. The third is a person who claims to have worked at Microsoft Research on Tay until June 2015.  He/she made two comments on my first post, but unfortunately deleted the second comment, which had lots of details.