5th Annual PKI R&D Workshop: Making PKI Easy to Use
April 4-6, 2006, NIST, Gaithersburg, MD

The official proceedings are published as a NIST Technical Publication and are available for download: NISTIR 7313.

Proceedings
Workshop Summary by Ben Chinowsky

Tuesday, April 4, 2006

9:00 am - 9:15 am  Opening Remarks
  Ken Klingenstein, Internet2, General Chair
  Kent Seamons, Brigham Young University, Program Chair (ppt)

9:15 am - 10:15 am  Keynote Address: Has Johnny Learnt To Encrypt By Now? (ppt)
  Examining the troubled relationship between a security solution and its users
  Angela Sasse, University College London

10:45 am - 11:45 am  Session 1: Standards I
  Session Chair: Rich Guida, Johnson & Johnson
  How Trust Had a Hole Blown In It: The Case of X.509 Name Constraints (pdf)
    David Chadwick, University of Kent
  Invited Talk: NIST Cryptographic Standards Status Report (ppt)
    Bill Burr, NIST

11:45 am - 12:45 pm  Session 2: Standards II - Leveraging DNSSEC and PK-INIT
  Session Chair: Neal McBurnett, Internet2
  Invited Talk: Trust Infrastructure and DNSSEC Deployment (ppt)
    Allison Mankin, Consultant
  Invited Talk: Integrating Public Key and Kerberos (ppt)
    Jeffrey Altman, Secure Endpoints Inc.

1:00 pm - 2:00 pm  LUNCH

2:00 pm - 3:30 pm  Session 3: Revocation
  Session Chair: Von Welch, NCSA / University of Illinois
  Invited Talk: Enabling Revocation for Billions of Consumers (ppt)
    Kelvin Yiu, Microsoft
  Navigating Revocation through Eternal Loops and Land Mines (ppt)
    Santosh Chokhani & Carl Wallace, Orion Security Solutions, Inc.
4:00 pm - 5:30 pm  Session 4: Easy-to-Use Deployment Architectures
  Session Chair: Stephen Whitlock, Boeing
  Simplifying Credential Management through PAM and Online Certificate Authorities (paper: pdf; presentation: ppt)
    Stephen Chan & Matthew Andrews, NERSC / Lawrence Berkeley National Lab
  Identity Federation and Attribute-based Authorization through the Globus Toolkit, Shibboleth, GridShib, and MyProxy (paper: pdf; presentation: ppt)
    Tom Barton, University of Chicago
    Jim Basney, NCSA / University of Illinois
    Tim Freeman, University of Chicago
    Tom Scavo, NCSA / University of Illinois
    Frank Siebenlist, MCSD, Argonne National Laboratory
    Von Welch, NCSA / University of Illinois
    Rachana Ananthakrishnan, MCSD / Argonne National Lab
    Bill Baker, NCSA / University of Illinois
    Monte Goode, Lawrence Berkeley National Laboratory
    Kate Keahey, MCSD / Argonne National Lab
  PKI Interoperability by an Independent, Trusted Validation Authority (paper: pdf; presentation: ppt)
    Jon Ølnes, DNV Research, Norway

Wednesday, April 5, 2006

9:00 am - 10:30 am  Session 5: Panel - Digital Signatures
  Panel Moderator: David Chadwick, University of Kent
  Panel members:
    Ron DiNapoli, Cornell University (pdf)
    Anders Rundgren, RSA Security (ppt)
    Ravi Sandhu, George Mason University (ppt)

11:00 am - 12:45 pm  Session 6: Domain Keys Identified Mail (DKIM) and PKI
  Session Chair: Barry Leiba, IBM
  Achieving Email Security Usability (paper: pdf; presentation: ppt)
    Phillip Hallam-Baker, VeriSign, Inc.
  DKIM Panel members:
    Jim Fenton, Cisco (pdf)
    Phillip Hallam-Baker, VeriSign, Inc.
    Tim Polk, NIST & IETF PKIX Co-chair (ppt)

1:00 pm - 2:00 pm  LUNCH

2:00 pm - 3:30 pm  Session 7: Work in Progress (WIP)
  Session Chair: Krishna Sankar, Cisco Systems
  Scheduled topics:
  • Experiences Securing DNS through the Handle System (ppt)
      Sam Sun, CNRI
  • International Grid Trust Federation: How to Build Trust Across the Global Grid
      Michael Helm, ESnet Berkeley Lab (ppt)
      Doug Olson, Lawrence Berkeley National Lab (ppt)
  • Suite B Enablement in TLS: A Report on Interoperability Testing Between Sun, RedHat, and Microsoft (ppt)
      Vipul Gupta, Sun
      Robert Relyea, RedHat
      Kelvin Yiu, Microsoft
  Impromptu Rump Session (sign-ups taken prior to the WIP by Jason Holt)
  • PKCS11 integration with Mac OS X keychain - Ron DiNapoli, Cornell (pdf)
  • ABUSE: Towards Usefully Secure Email - Chris Masone, Dartmouth
  • Mobile Phones as Secure Containers - Anders Rundgren, RSA Labs (ppt)
  • Does an offline CA make sense - David Cooper, NIST (ppt)

4:00 pm - 5:30 pm  Session 8: Panel - Browser Security User Interfaces
  Why are web security decisions hard and what can we do about it?
  Panel Moderator: Jason Holt, Brigham Young University
  Combined presentation: (ppt)
  Panel members:
    Amir Herzberg, Bar Ilan University
    Frank Hecker, Mozilla Foundation
    Sean Smith, Dartmouth College
    George Staikos, KDE
    Kelvin Yiu, Microsoft

Thursday, April 6, 2006

9:00 am - 9:30 am  Session 9: PKI in Higher Education
  Session Chair: Eric Norman, University of Wisconsin
  CAUDIT PKI Federation - A Higher Education Sector Wide Approach (paper: pdf; presentation: pdf)
    Viviani Paz, Australian Computer Emergency Response Team
    Rodney McDuff, The University of Queensland

9:30 am - 10:45 am  Session 10: Panel - Federal PKI Update
  Panel Moderator: Peter Alterman, National Institutes of Health
  Panelists:
    Judy Spencer, General Services Administration (ppt)
    David Cooper, NIST (pdf)

11:15 am - 12:30 pm  Session 11: Panel - Bridge to Bridge Interoperations
  Panel Moderator: Peter Alterman, National Institutes of Health (ppt)
  Panelists:
    Debb Blanchard, Cybertrust (ppt)
    Santosh Chokhani, Orion Security Systems, Inc. (ppt)
    Scott Rea, Dartmouth College (ppt)

12:30 pm - 12:45 pm  Wrap up

5th Annual PKI R&D Workshop Summary
Ben Chinowsky, Internet2

Note: this summary is organized topically rather than chronologically. See http://middleware.internet2.edu/pki06/proceedings/ for the workshop program, with links to papers and presentations.

The workshop addressed its theme of "making PKI easy to use" from three angles: how much to expect from the user, and how to design accordingly; PKI and the DNS (DKIM and DNSSEC in particular); and deployment experiences. There were also some additional talks not directly related to the workshop theme.

What's reasonable to expect of the users? How to design around what it's not reasonable to expect of them?

Angela Sasse keynoted with a talk titled Has Johnny Learnt To Encrypt By Now?
The short answer is "no", for reasons that haven't changed since Alma Whitten posed the question at PKI03: security is complex and unlike anything else users have to deal with, and people aren't properly motivated to use it. Much of Sasse's talk counterposed her approach to solving these problems with Whitten's. The overarching difference is Sasse's skepticism that users can learn everything they would need to for Whitten's approach to succeed. Sasse cited Eric Norman's "Top 10" list (actually longer than ten items) of things users would need to learn in order to use a typical PKI implementation. Whitten's own research suggests users would need a day and a half of training just to get started; for many organizations this is too long.

Sasse's approach to these problems overlaps with Whitten's, but with marked differences of emphasis. Sasse favors:
• designing a "socio-technical system", not just a user interface. In particular, Sasse advocates "design to secure things people care about", citing Felten & Friedman's work on "value-sensitive" design.
• more emphasis on simplifying systems, and less on teaching users to understand complex systems.
• automating security, rather than keeping it visible.

One example of this approach is finding better names for things. Sasse laid great stress on the need to find better words for the concepts users will still need to learn; for example, the meanings of "key", "public", and "private" in PKI are completely different from their meanings in everyday life. Sasse also cited Garfinkel & Miller's work on Key Continuity Management, which makes heavy use of color-coding (see http://groups.csail.mit.edu/uid/projects/secure-email/), and approvingly cited Bruce Schneier's work for its focus on "business and social constraints".

In the discussion following this session, the group greatly extended the analogy between driving and computer security that Eric Norman had used to introduce the "Top 10" list cited by Sasse.
Is requiring users to understand the basic concepts of public key cryptography more like requiring them to know how the engine works (avoidable and bad), or more like requiring them to know the rules of the road (unavoidable and good)? Sasse suggested propounding "simple but strong" rules, like "never externalize your password in any way". She also suggested that Whitten's "safe staging" idea has some promise. Sasse strongly advocates risk analysis, in particular to see where security measures shift risks. For example, much as car alarms have led to carjackings (instead of being able to hot-wire the vehicle, the attacker now needs to get the keys), biometrics have led to attackers chopping off fingers. Sasse also agreed with David Wasley's comment that the user needs to know at least a little in order to cope when things go wrong - like a driver knowing the symptoms of underinflated tires.

Usability Panel Discussions

There were two usability panels, one on digital signatures and the other on browsers. In the digital signatures panel, Ron DiNapoli asked whether the Kerberos KClient common interface could serve as a model. He argued that a unified interface makes things much simpler, and from this standpoint gave an optimistic assessment of PDF signing and encryption support. Anders Rundgren discussed web-form signing, which is already used by millions in Europe, largely for citizen-to-government transactions. However, the systems in use are proprietary and non-interoperable, so Rundgren is launching the WASP (Web Activated Signature Protocol) standards proposal in cooperation with five groups in Europe. The WASP use cases all stem from efforts to increase usage of e-government. Ravi Sandhu discussed prospects for transaction signatures, as opposed to document signatures - addressing the many potential applications with many transactions requiring only a modest level of assurance, rather than a few transactions requiring high assurance.
One key difference is that where document signatures are generally human-verified, transaction signatures are verified by a computer, "with possibly human audit and recourse forensics". Both Rundgren and Sandhu cited the Outlook Express "Security Warning" black screen as a particularly egregious example of how not to design a user interface for email security. In the discussion, Rich Guida stressed the importance of asking "Is it better than the way we do it now?" Guida suggested that even with their imperfections, any of the signing mechanisms presented in the panel would be better than paper-based signature processes like signing every line of a form. Guida noted that SAFE (http://www.safe-biopharma.org) is working on a universal signing interface. One of the project contractors has developed an approach to verifying historical digital signatures, based on retrieving historical CRLs. This sparked controversy about record-retention issues more generally. David Chadwick argued that efforts to develop trusted timestamping standards for verifying digital signatures are "a complete waste of time", with the exception of one-party signing situations, like a will. Otherwise, the two parties can always put time fields in the signed documents, and the recipient can use this information as part of deciding whether the signature is good. Chadwick said that to expect a relying party to trust you to (for example) pay an invoice for goods received, but not to trust you to tell the time correctly, seems like a rather strange trust model. Peter Hesse noted the signing of lab notebooks to back patent claims as another example of one-party signing. Sandhu argued that record retention will clearly not be a killer app for digital signatures, and expressed surprise that it had dominated the discussion; he stressed the need to look at application requirements and let those drive the discussion.
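Chadwick's alternative to trusted timestamping, in which the parties simply put time fields inside the signed content, can be sketched in a few lines. This is an illustrative sketch only: an HMAC with a hypothetical shared key stands in for a real public-key signature so the example stays self-contained, and the document text is invented.

```python
import datetime
import hashlib
import hmac
import json

# Hypothetical shared key; a real deployment would use the signer's
# private key and an asymmetric signature instead of an HMAC.
SECRET = b"shared-demo-key"

def sign_document(body, when):
    """Sign a document with the claimed signing time embedded in the payload."""
    payload = json.dumps({"body": body, "signed_at": when.isoformat()},
                         sort_keys=True)
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify_document(doc):
    """Return the claimed signing time if the signature verifies, else None."""
    expected = hmac.new(SECRET, doc["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, doc["sig"]):
        return None  # tampering invalidates the time field along with the body
    return datetime.datetime.fromisoformat(
        json.loads(doc["payload"])["signed_at"])

doc = sign_document("Invoice for goods received", datetime.datetime(2006, 4, 5, 12, 0))
print(verify_document(doc))  # 2006-04-05 12:00:00
```

Because the time field is under the signature, the relying party gets the signer's claimed time for free; Chadwick's point is that if you trust the signer for the transaction itself, trusting them to tell the time is a smaller leap.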
Hesse brought this back around to "is it better than paper?", which can't prove when it was signed and doesn't need to; he also suggested that "are we overengineering?" is a valid question here.

Amir Herzberg, Frank Hecker, Sean Smith, George Staikos, and Kelvin Yiu gave a joint presentation on browser security user interfaces, moderated by Jason Holt. Particularly noteworthy in their slides was a good assortment of bad examples. Holt noted that a common element of these is that the user doesn't know what they would need to know in order to quantify the risk involved. Herzberg made two suggestions for improvement: a mechanism that would let you choose a certificate validation service you trust, much as you choose antivirus software; and "public-protest-period certificates", for which the certificate request would be published for a time before the certificate is issued, giving the targets of misleading certificate requests an opportunity to object. Herzberg also argued that security indicators should always go in the graphical elements of the browser itself (the browser "chrome"), not in the page content.

The discussion centered on the need for browser and web site designers to get guidance on how to handle the naive user. Holt noted that there doesn't seem to be any documentation of best practices for secure web site developers, and suggested that the PKI community might be well suited to produce such documentation. Hecker noted that the Mozilla Foundation may have grant funds available for the development of best-practices documents. Sean Smith noted a recent paper titled "Why Phishing Works"; see http://people.deas.harvard.edu/~rachna/. Herzberg suggested that the long-term solution for the naive user will be a "secure browsing mode".
James Fisher suggested that developers need guidelines for naive users similar to those developed for sight-impaired users; David Wasley suggested "a UL Labs for software," offering certification that user interfaces are no more complex than necessary. Sean Geddis argued that security should be built into the operating system, with applications forced to acquire the appropriate credentials. There was general agreement that while this is true in principle, the amount of cooperation it requires from application developers is not forthcoming, so it's not going to happen. There was also a short demonstration of the security user interface in Internet Explorer 7, which uses red-yellow-green color-coding. Holt summed up the discussion by stressing the need to compile best practices to guide the development of secure browsers and web sites.

Easy-to-Use Deployment Architectures

Stephen Chan described work at NERSC on Simplifying Credential Management through PAM and Online Certificate Authorities. The paper and presentation include a useful list of PKI "de-motivators" and the ways in which they are addressed by using short-lived certificates and having users authenticate with PAM (Pluggable Authentication Modules). Chan noted that most of the code from this project is freely available upon request.

Von Welch provided an overview of the Globus Toolkit, Shibboleth, GridShib, and MyProxy. The Globus Toolkit (http://www.globus.org/toolkit/) is Globus' core Grid software; Shibboleth (http://shibboleth.internet2.edu) is the Internet2 Middleware Initiative's flagship federating software. GridShib (http://gridshib.globus.org) adds Globus Toolkit and Shibboleth plugins that enable Shibboleth Identity Provider data to be used for Grid access control decisions. MyProxy (http://grid.ncsa.uiuc.edu/myproxy/) is a credential repository and CA that greatly reduces the pain involved in acquiring credentials to run Grid jobs. Work on integrating GridShib and MyProxy is ongoing.
Jon Ølnes discussed PKI Interoperability by an Independent, Trusted Validation Authority. This approach aims to lessen the complexity faced by relying parties. A Validation Authority (VA) is "an independent trust anchor": CAs do not delegate trust to a VA; rather, the VA offers validation services directly to relying parties. Ølnes's employer, DNV, describes itself as "a leading international provider of services for managing risk", among other things certifying the seaworthiness of ships and the management processes of corporations. Offering VA services is how DNV plans to expand this role into the area of "digital value chains". The idea of a VA was well received by the group; one attendee described it as "perhaps the most important solution the PKI community has been missing". A deployment is planned for this summer.

PKI and the DNS

IETF DKIM Working Group co-chair Barry Leiba moderated a panel discussion on Domain Keys Identified Mail (DKIM). After a show of hands revealed that few in the room were familiar with the technology, Jim Fenton gave an Introduction to DKIM. DKIM is a way for an email domain to take responsibility for sending an email message. The central goal of DKIM is to stop email spoofing; its central concepts are 1) key distribution via DNS ("a useful pseudo-PKI for DKIM"), 2) using raw keys, and 3) signatures representing the domain, not the author. Tim Polk discussed DKIM Seen Through a PKIX-Focused Lens; he noted that "DNS poisoning is not that difficult, it just isn't that interesting in most cases. DKIM makes it interesting." Nonetheless, Polk argued that from a spam-mitigation standpoint DKIM is much better than nothing, and that the incentive it provides to attack the DNS may in turn drive DNSSEC deployment. Polk also noted that DKIM is extensible to other key-fetching services, and suggested that these include one based on X.509.
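Fenton's first concept, key distribution via DNS, works by deriving a DNS TXT record name from two tags of the DKIM-Signature header: the selector (s=) and the signing domain (d=). A minimal sketch of that derivation, with an invented header for illustration (DKIM, then an IETF draft and later RFC 4871, publishes the key at <selector>._domainkey.<domain>):

```python
# Sketch of DKIM's "pseudo-PKI": the verifier reads the selector (s=)
# and signing domain (d=) out of the DKIM-Signature header and looks up
# the sender's public key in a TXT record at a well-known DNS name.

def dkim_key_record_name(dkim_signature_header):
    """Return the DNS name where the signer's public key is published."""
    tags = {}
    for field in dkim_signature_header.split(";"):
        if "=" in field:
            key, _, value = field.strip().partition("=")
            tags[key.strip()] = value.strip()
    # Key location is <selector>._domainkey.<signing domain>
    return "{}._domainkey.{}".format(tags["s"], tags["d"])

# Invented example header, truncated for brevity
header = "v=1; a=rsa-sha256; d=example.com; s=mail2006; h=from:to:subject; b=..."
print(dkim_key_record_name(header))  # mail2006._domainkey.example.com
```

This is what makes DKIM a "pseudo-PKI": the DNS delegation hierarchy stands in for a certificate chain, which is also why (as Polk observed) DKIM raises the stakes on securing the DNS itself.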
In the discussion, there was strong approval of the concept of DKIM as a good foundation to build on, rather than a complete solution. Leiba noted that DKIM is good for whitelisting, not blacklisting. Neal McBurnett suggested that the semantics of a DKIM signature are basically "I [the domain] am willing to be punished if this is bad"; Leiba said that it's more like "I acknowledge that I put this on the Internet". Different signers will have different interpretations of exactly what that means; some people want more clarity in the interpretation, and that complicates things. Phillip Hallam-Baker expects the DKIM standard to provide a flag to say "all messages from this domain should be signed"; in his view, giving potential signers confidence that signing will make a message more likely to get through - in particular, less likely to be flagged as spam - will be key to DKIM uptake. Also, in response to questions from Chadwick, Hallam-Baker agreed that DKIM is just as susceptible to bad client design as S/MIME, and relies just as strongly as any PKI on CAs not permitting lookalike domains. There was strong general agreement that widespread DKIM deployment would mean that a lot more would be riding on the success or failure of attempts to secure the DNS. More on DKIM is at http://mipassoc.org/dkim/.

Noting the need to raise our sights from the goal of mere "usability", Phillip Hallam-Baker offered an approach to Achieving Email Security Luxury, relying centrally on DKIM. Hallam-Baker wants a security interface as compelling as a video game: if we aim high, maybe we'll hit higher than we would by aiming lower. First among his requirements is to avoid the assumption that users want to become computer experts. Some development of expertise among users will nonetheless be needed; here Hallam-Baker stressed the importance of providing education ("empowerment"), not just training ("mere instruction").
Hallam-Baker's software solution relies centrally on the power of branding. It uses DKIM and the PKIX LogoType extension to implement "Secure Internet Letterhead": verified mail will display the logo of the sender and (on request) the logo of the verifier in the "chrome" of the email client. The use of DNS to distribute keys improves the chances of rapid deployment. Other than DKIM, all components of this solution have been standardized; DKIM is currently being standardized in the IETF (see http://www.ietf.org/html.charters/dkim-charter.html). A prominent theme in Hallam-Baker's talk (as in Welch's and Chan's Grid presentations) was that most of the things we need to architect an easy-to-use PKI are already available; it's largely a matter of putting existing components together in new ways.

Allison Mankin presented an update on Trust Infrastructure and DNSSEC Deployment. Attacks on the DNS are usually not well publicized; http://www.dnssec-deployment.org has details on recent attacks. Mankin noted that the major costs of DNSSEC deployment are in training, operation, and key management, not computing and network resources; more cost-benefit analysis is needed. Operating system, firewall, and application support for DNSSEC still needs work, and an extension to prevent zone-walking is still in development, but Mankin strongly advocates deploying pieces as soon as they're ready. She was seconded by Hallam-Baker, who pointed out that SSL - the only implementation of public-key cryptography to deploy widely - had serious flaws when deployment first got under way.

Deployments

In his opening remarks for the workshop, Ken Klingenstein observed that the PKI community is currently working from the bottom up, building "pockets" of functioning infrastructure. One new pocket is the CAUDIT PKI for higher education in Australia; Viviani Paz provided an overview.
Four levels of assurance are offered, depending on the strength of the proofs of identity provided by a prospective certificate holder. Of particular note is the points system the CAUDIT PKI uses for identity proofing (e.g., a passport is worth 70 points, a driver's licence only 40); this system is based on the laws governing financial transaction reporting in Australia. CAUDIT is taking a phased approach to deployment; the pilot phase has concluded and the pre-production phase is underway.

One of the largest existing pockets of deployment is the US Federal PKI. Peter Alterman gave an update and moderated a panel on developments in this area. Thirteen Federal entities are currently cross-certified; further information is available at http://www.cio.gov/fpkipa/. David Cooper discussed developments in the Path Discovery and Verification Working Group of the FBCA (see http://www.cio.gov/fbca/pdvalwg.htm). A path discovery test suite is under development. Judy Spencer explored The Role of Federal PKI in compliance with Homeland Security Presidential Directive 12. HSPD-12 is titled "Policy for a Common Identification Standard for Federal Employees and Contractors". PKI and smartcards are central to the implementation, as are new processes for personal identity verification; one major change will be requiring government contractors to pass the same background checks as government employees. See http://csrc.nist.gov/piv-project/ and http://www.cio.gov/ficc/.

There were also reports on steady though incremental progress in building corridors among these and other pockets. Alterman moderated a panel on Bridge-to-Bridge Interoperability; he observed that cross-certification among bridges has the potential to greatly expand the reach of PKI. Debb Blanchard provided an overview of the Bridge-to-Bridge Working Group.
The BBWG was launched to address issues around the FBCA cross-certifying with other bridges such as HEBCA, but has since broadened its scope to bridge CAs more generally. A fundamental principle for the BBWG is that no transitive trust is allowed across bridges. This point was also stressed by Santosh Chokhani in his talk on Technical Considerations for Bridge-to-Bridge Interoperability: trust is bilateral, like business relationships; it cannot be transitive across bridges. Finally, Scott Rea updated the group on PKI in higher education and progress toward HEBCA deployment. The key uses he sees for PKI in higher education are S/MIME, paperless workflow, Shibboleth, federated Grid, and e-grants. Because higher education gets so much federal funding, FBCA is the primary target for HEBCA cross-certification. A prototype is operational, and from a purely technical standpoint HEBCA has been ready to launch for several months; watch http://www.educause.edu/hebca/.

Snags in the standards process can prevent us from getting as far as we might in building and interconnecting pockets of PKI. David Chadwick explored How Trust Had a Hole Blown In It: The Case of X.509 Name Constraints. For ten years ISO/ITU-T and IETF PKIX have failed to bring their interpretations of name constraints into alignment. Chadwick argued that imprecision in the base standard led to misunderstanding of the original intentions behind name constraints, and that both sides have been slow to rectify these misunderstandings. His talk was followed by a spirited discussion in which several of the individuals involved in the history Chadwick recounted disagreed with his account of that history, with his view of the current seriousness of the problem, and about the best way to fix it.

Other topics

Bill Burr presented a comprehensive NIST Cryptographic Standards Status Report. NIST's current focus is getting Federal users off of 80-bit-equivalent cryptography (e.g. 1024-bit RSA & DSA) by 2010.
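Burr's 80-bit figure comes from NIST's comparable-strength guidance (SP 800-57 Part 1), which rates 1024-bit RSA/DSA at roughly the strength of an 80-bit symmetric key and 2048-bit at 112 bits. The mapping can be sketched as a small lookup; the function names and the pass/fail framing below are my own, not NIST's:

```python
# Approximate symmetric-equivalent strengths of RSA/DSA modulus sizes,
# per NIST SP 800-57 Part 1. Illustrative only; consult the standard
# for the authoritative table and transition dates.
RSA_STRENGTH = [(1024, 80), (2048, 112), (3072, 128), (7680, 192), (15360, 256)]

def rsa_security_bits(modulus_bits):
    """Return the approximate symmetric-equivalent strength of an RSA modulus."""
    strength = 0
    for size, bits in RSA_STRENGTH:
        if modulus_bits >= size:
            strength = bits
    return strength

def acceptable_after_2010(modulus_bits):
    # Burr's point: retire 80-bit-equivalent keys (e.g. 1024-bit RSA) by 2010,
    # i.e. require at least 112 bits of equivalent strength.
    return rsa_security_bits(modulus_bits) >= 112

print(rsa_security_bits(1024), acceptable_after_2010(1024))  # 80 False
print(rsa_security_bits(2048), acceptable_after_2010(2048))  # 112 True
```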
There are complex patent issues with elliptic-curve cryptography (ECC); Burr was asked whether ECC provides enough performance improvement at real-world key lengths to make it worth the uncertainty around patents. Burr responded that as part of the Department of Commerce, which also includes the Patent and Trademark Office, NIST cannot discriminate against technologies based on patent status; he also expects Windows Vista to make ECC more widely available. Burr said that he is now 98% sure that there will be a NIST competition for a replacement for SHA.

Jeffrey Altman gave an overview of the state of the art in Integrating PKI and Kerberos. PK-INIT, a means of using a certificate to get a Kerberos ticket, is the best-established project, but there are also PK-APP (KX.509: using Kerberos to get a certificate) and PK-CROSS (using certificates for inter-domain Kerberos). Altman recommends that deployment efforts focus on reducing the number of credentials that users have to worry about.

There were two presentations on revocation. Santosh Chokhani presented Marine Corps-funded work on Navigating Revocation through Eternal Loops. Chokhani presented various options for dealing with the circular dependencies in revocation that can be created by self-issued certificates. Chokhani noted that he's not advocating any of these options over the others, rather saying "if you pick your poison, here's your antidote." Kelvin Yiu, lead Program Manager for Microsoft Windows security, discussed Enabling Revocation for Billions of Consumers, with a focus on revocation in Windows Vista. Internet Explorer 7 in Vista will enable revocation checking by default. Yiu explored various lessons learned and the tradeoffs between usability and the large downloads required. Yiu's slides include a list of best-practice recommendations to the industry, headed by "Use HTTP, not LDAP".

There were four short work-in-progress presentations.
Sam Sun presented Experiences Securing DNS through the Handle System. Plans for the software include an open-source release, deployment in the .cn TLD registry, and using it to support ENUM service. Michael Helm presented an overview of the International Grid Trust Federation. The IGTF is composed of three Policy Management Authorities covering the Americas, Europe, and the Asia/Pacific region; see http://www.gridpma.org. Helm noted the January 2006 launch of the European Commission's E-Infrastructure Shared Between Europe and Latin America (EELA) project to support Grid development in Latin America. The Large Hadron Collider (LHC) is a major driver for the IGTF. Doug Olson discussed PKI in the Open Science Grid. OSG (http://www.opensciencegrid.org) is also heavily focused on the LHC, as well as on virtual-organization support. OSG uses the NSF Middleware Initiative distribution as its core software. Both Helm and Olson cited making PKI more usable for less technical users as a major issue. Robert Relyea and Kelvin Yiu presented Suite B Enablement in TLS: A Report on Interoperability Testing Between Sun, RedHat, and Microsoft. Suite B is an NSA standard for elliptic-curve cryptography (ECC); see http://www.nsa.gov/ia/industry/crypto_suite_b.cfm. Bill Burr noted that while NIST is not mandating ECC, it is advocating it. Burr also remarked that if you want to use ECC anywhere, you want to use it on smartcards.

The WIP session concluded with a "rump session" in which presenters were given three minutes each for impromptu presentations. Ron DiNapoli explained the motivation for, and gave a very short demonstration of, his work on Integrating PKCS-11 with Apple Keychain Services. Chris Masone, a student of Sean Smith, set out the early stages of his work on Attribute Based, Usefully Secure Email (ABUSE), using short-lived credentials. Anders Rundgren outlined his work on WS-Mobile, a scheme for using cellphones to replace smartcards.
Finally, David Cooper of NIST posed the question Are Offline Root CAs Worth It?, not offering an answer but providing a useful rundown of the pros and cons.

Conclusion

PKI06 further solidified the consensus from PKI04 and PKI05: "Understanding and educating users is centrally important" and "The specifics of any particular PKI deployment should be driven by real needs, and should be only as heavyweight as necessary." PKI06 also filled out this consensus with further examples and experiences. With respect to experiences, there was strong interest in expanding the work-in-progress and rump-session components of future workshops. There was also increased interest in documenting best practices for industry to use in implementing the PKI0x consensus.

PKI06 was well attended, setting an all-time attendance record for the workshop series. Program Committee Chair Kent Seamons pointed out that although the number of technical paper submissions was quite low this year, the peer review process was rigorous and the acceptance rate was comparable to that of previous years. As recommended by attendees at previous workshops, this year's program had many more invited talks and panel discussions; the change was well received. The organizers will make a concerted effort to increase the number of technical paper submissions in the future.

PKI07 will focus on applications. Please join us at NIST, April 17-19, 2007.
PKI 2006: Making PKI Easy to Use
Kent Seamons, Program Chair
5th Annual PKI R&D Workshop
NIST, Gaithersburg, MD
April 3-6, 2006

Program Committee
• Kent Seamons, Brigham Young University (chair)
• Peter Alterman, National Institutes of Health
• Stefan Brands, Credentica and McGill University
• Bill Burr, NIST
• David Chadwick, University of Kent
• Yassir Elley, Forum Systems
• Carl Ellison, Microsoft
• Stephen Farrell, Trinity College Dublin
• Richard Guida, Johnson & Johnson
• Jason Holt, Brigham Young University
• Russ Housley, Vigil Security, LLC
• Ken Klingenstein, Internet2
• Neal McBurnett, Internet2
• Clifford Neuman, University of Southern California
• Eric Norman, University of Wisconsin
• Tim Polk, NIST
• Ravi Sandhu, George Mason University and TriCipher
• Krishna Sankar, Cisco Systems
• Frank Siebenlist, Argonne National Laboratory
• Sean Smith, Dartmouth College
• Von Welch, NCSA
• Stephen Whitlock, Boeing
• Michael Wiener, Cryptographic Clarity
• William Winsborough, University of Texas at San Antonio
Thank You!

Special Thanks
• Ken Klingenstein, Internet2
• Neal McBurnett, Internet2
• Sara Caswell, NIST

Technical Program Process
• The number of submissions was down this year, but the quality was good
• Acceptance rate in line with past years
• Each paper received 4+ reviews
• Some papers received shepherding
• Thank you authors and PC
• More panels and invited speakers this year, as requested by past attendees

Last Minute Instructions
• Speakers, please contact your session chairs in advance, at the beginning of the break before your session
• An electronic copy of each presentation should be given to Neal for the web site (ppt or pdf)
• Work-In-Progress Session on Wed afternoon
  - Will include a rump session (5 minute limit)
  - Contact: Jason Holt
• Informal Birds of a Feather sessions can be held Wed evening

Looking to the Future
• Please make plans now to submit a technical paper next year
• Complete a survey at the
conclusion of the workshop – your feedback is important to us!

Enjoy the Workshop
 The success of the workshop is in your hands
 Participate!

Has Johnny learnt to encrypt by now? Examining the troubled relationship between a security solution and its users
M. Angela Sasse
Professor of Human-Centred Technology
Department of Computer Science, University College London, UK
a.sasse@cs.ucl.ac.uk
www.cs.ucl.ac.uk/staff/A.Sasse
5th Annual PKI R&D Workshop 2006

Overview
1. Usable security
• History – Johnny & the Enemies
• A framework for thinking about security
2. Usable encryption: what has been tried, and how successful was it
3. Concluding thoughts
• Who has to learn what
• Possible technology pieces towards a solution

Two of three “classics” re-printed in recent book: Security and Usability: Designing Secure Systems that People Can Use. Edited by Lorrie Faith Cranor & Simson Garfinkel. O’Reilly 2005.

“Why Johnny Can’t Encrypt”
• Whitten & Tygar, Procs. USENIX 1999
• Graphical user interface to PGP 5.0
• Even after a detailed introduction, only 3 out of 12 participants could encrypt their email successfully
• Need more than a pretty face: graphical ≠ usable
• Problems: 1. User tasks not represented 2. Misleading labels 3. 
Lack of feedback

“Users Are Not The Enemy”
• Adams & Sasse, Comm. ACM 1999
• Many users’ knowledge about security is inadequate
• Users will shortcut security mechanisms that get in the way of their goals/tasks
• Security policies often make impossible demands of users
• Users lose respect for security, downward spiral in behaviour

How do we design a usable system?
• Consider users and their characteristics
– Minimize physical and mental workload
• Consider users’ goals and tasks
– Functionality must support these, user interface must signpost in those terms
– Conflicting goal structures are always bad news
• Consider context of use
– Physical and social environment

Example: passwords

The path to usable security (according to Whitten, 2004): “… the usability problem for security is difficult to solve precisely because security presents qualitatively different types of usability challenges from those of other types of software […] making security usable will require the creation of user interface design methods that address those challenges.”

The path to usable security (according to Sasse et al., 2001)
• Most security mechanisms are downright unusable – apply key usability principles
• Identify users and relevant characteristics
• Minimize their physical & mental workload
• Security is an enabling task, so fit in with production tasks and context of use – policies and mechanisms
• When extra effort is needed, educate and motivate

What we agreed on … and what not
• Agreed: development of usable security systems is similar to safety-critical systems development; security is a secondary goal for most users; underlying security systems are complex
• Not agreed: designing a user interface vs. designing a socio-technical system; security UIs should prevent errors and teach users about underlying security systems vs. simplify the underlying systems; education and behaviour modification are needed, and security must remain visible vs. simplify & automate wherever possible

A telling footnote … “… when presented with a software programme incorporating visible public key cryptography, users often complained during the first 10-15 minutes of the testing that they would expect ‘that sort of thing’ to be handled invisibly. As their exposure to the software continued and their understanding of the security mechanism grew, they generally ceased to make that complaint.” Alma Whitten’s thesis, 2004

… but we only want what’s best for them! “There are significant benefits to supporting users in developing a certain base level in generalizable security knowledge. A user who knows that, regardless of what application is in use, one kind of tool protects the privacy of transmission, a second kind protects the integrity of transmission, and a third kind protects the access to local resources, is much more empowered than one who must start afresh with each application.” Alma Whitten’s thesis, 2004

So … what would Johnny have to learn? The following lists were posted by Eric Norman (University of Wisconsin) to the Yahoo HCISec mailing group last year, and are reproduced with his kind permission.

“Those of us who grew up on the north side of Indianapolis have this thing for top 10 lists. At least one of us (me) believes the following: when it comes to PKI and security, users are going to have to learn something. I'm not sure just what that something is; I know it's not the mathematics of the RSA algorithm, but I believe that no matter what, there's something that they are just going to have to learn. 
It's like being able to drive down the concrete highway safely.”

“You don't have to learn about spark plugs and distributors, but you do have to learn how to drive, something about what the signs mean, what lines painted on the road mean, and so forth. Nobody can do this for you; each user (driver) is going to have to learn it for themselves. In order to get a better handle on just what it is that folks are going to have to learn, I'm trying to come up with a top 10 list of things that must be learned. Here's what I have so far with some help from some other folks I know who are more technophiles than human factors people. There are two lists: one for users and the other for administrators, developers, etc.”

Things PKI users have to learn
1. How to import a trust anchor.
2. How to import a certificate.
3. How to protect your privates (private keys, that is).
4. How to apply for a certificate in your environment.
5. Why you shouldn't ignore PKI warnings.
6. How to interpret PKI error messages.
7. How to turn on digital signing.
8. How to install someone's public key in your address book.
9. How to get someone's public key.
10. How to export a certificate.

… and
11. Risks of changing encryption keys.
12. How to interpret security icons in sundry browsers.
13. How to turn on encryption.
14. The difference between digital signatures and .signature files.
15. What happens if a key is revoked.
16. What does the little padlock really mean.
17. What does it mean to check the three boxes in Netscape/Mozilla?
18. What does "untrusted CA" mean in Netscape/Mozilla?
19. How to move and install certificates and private keys.

Developers, administrators, etc.
1. What does the little padlock really mean.
2. How to properly configure mod_ssl.
3. How to move and install certificates and private keys.
4. 
What .pem, .cer, .crt, .der, .p12, .p7s, .p7c, .p7m, etc. mean.
5. How to reformat PKI files.
6. How to enable client authentication during mod_ssl configuration.
7. How to dump BER formatted ASN.1 stuff.
8. How to manually follow a certificate chain.
9. The risks of configuring SSL stuff such that it automatically starts during reboot.
10. How to extract certificates from PKCS7 files, etc.

… and
11. How to make PKCS12 files.
12. How to use the OpenSSL utilities.
13. What happens if a key is revoked.

Can a nice UI make security tools easy to use?
• Problem lies deeper:
• “key” cues the wrong mental model
• Meaning of “public” and “private” is different from everyday language
• Underlying model too complex
• Whitten produced a tutorial on public key cryptography that takes 1.5 days
• Solutions?
• Automatic en/decryption where encryption is needed
• Simplify model/language

Results from Grid security survey, 2005
• Survey of security issues in UK eScience (Grid) programme
• Most frequently mentioned issue: certificates
• Many users complained about the effort involved in obtaining certificates, and the complexity involved in using them

How to get an eScience certificate
1. user gets notified they need a certificate to use a Grid application
2. instruction sheet on how to get a certificate
3. point browser at National CA
4. CA sends notification to local CA
5. go to local CA with proof of identity/authorization – what if user does not have one?
6. local CA person releases to CA for specific machine, gives pw for cert release
7. person can download from CA via browser to local machine
8. export certificate from browser to directory where application will look for it 
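Several of the items in the developers' list above (what .pem and .der mean, how to reformat PKI files) come down to one fact: a PEM file is just DER bytes wrapped in base64 between BEGIN/END armour lines, while .cer/.crt files may hold either form. A minimal sketch of the conversion, using only the Python standard library (the function names are illustrative, not from any particular toolkit):

```python
import base64
import textwrap

def pem_to_der(pem_text):
    """Strip the -----BEGIN/END----- armour lines and base64-decode the body."""
    lines = [line.strip() for line in pem_text.strip().splitlines()]
    body = "".join(line for line in lines if not line.startswith("-----"))
    return base64.b64decode(body)

def der_to_pem(der_bytes, label="CERTIFICATE"):
    """Wrap DER bytes in base64 PEM armour, 64 characters per line."""
    b64 = base64.b64encode(der_bytes).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))
    return f"-----BEGIN {label}-----\n{body}\n-----END {label}-----\n"
```

For example, `pem_to_der(der_to_pem(der))` round-trips any DER blob unchanged; the same armour scheme is used for private keys and PKCS#7 files, only the label differs.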
• Obtaining a certificate was perceived to require too much time and effort; many projects would share certificates obtained by one project member to “make it worth it.”
• Defense of security people: “People should regard it as the price of admission you have to pay for using the Grid.”

Problems in using certificates
• Certificate has to be stored in the right application directory
• Will not work on a different machine, but
• … anyone using my machine can use it (not uncommon in Grid projects).
• To users, it’s just another file on their computer – nothing that marks it out as something they should look after like a bank statement
• Problems understanding terminology – “doesn’t work like a key” – “there is no such thing as half a secret”

Ironic twist …
• Users actually have security requirements – and one of them is availability of their data
• Terrified that key and/or certificates stop working

Security metaphors
• Metaphors used by security experts as shorthand for communicating with each other do not work for a wider audience
• “key” cues the wrong mental model – do not behave like locks and keys
• Meaning of “public” and “private” is different from everyday language
• Not clear why a “digitally signed” message = hasn’t been tampered with – most users think it means it is from who it says it is …

Improving Johnny’s performance
• Garfinkel & Miller: overhead of obtaining certificates is a barrier to adoption
• Solution:
– Key Continuity Management (KCM)
– Colour-coding messages according to whether the message was signed, and whether the signer was previously known. 
• Remaining problems: did not realise that encrypted ≠ secret if you send the message to an attacker

It’s not just end-users who struggle …
• Case studies with eScience (Grid) software developers identified
– Many developers have difficulty understanding how to implement PKI
– Tendency to avoid using PKI because it was seen to be too complex, and likely to put off potential users
– Cost of implementation and operation considered too high
• Zurko & Simon pointed out in 1996 that not only users, but developers & system managers struggle with the complexity of security

Even cryptographers can get it wrong …
• In a recent paper, Yvo Desmedt describes his failure to encrypt a wireless link … and blames it all on network/system managers …
• “… system managers do not understand the consequences of their actions and may not know of, for example, man-in-the-middle attacks or understand these correctly.”
• Example of a colleague whose encrypted link to the firewall defaulted to un-encrypted when he briefly closed the lid on his Powerbook …

Many current implementations are just – well – cheap …

“Too many engineers consider cryptography to be a sort of magic security dust they can sprinkle over their hardware and software […].” “The fundamentals of cryptography are important, but far more important is how those fundamentals are implemented and used.” “Book after book presented complicated protocols for this or that, without any mention of the business and social constraints within which those protocols would have to work.” N. Ferguson & B. Schneier: Practical Cryptography, 2003

“Humans are incapable of storing high-quality cryptographic keys, and they have unacceptable speed and accuracy when performing cryptographic operations. (They are also large, expensive to maintain, difficult to manage, and they pollute the environment.) 
It is astonishing that these devices continue to be manufactured and deployed. But they are sufficiently pervasive that we must design our protocols around their limitations.” [C. Kaufman, R. Perlman & M. Speciner: Network Security]

To get on the network today:
• SSID: PKI2005
• WEP Key (HEX): 12E9CEA5381354FD6FE23234EA

Summary (1) – lessons from Johnny
1. Make it as easy as possible for Johnny to do the right thing
• minimize physical and mental workload, and
• consider his goals and context of use.
2. If you want to educate Johnny
• get your terminology in order first
• motivate him by linking security to things he cares about
3. Less complexity, more integration would help all users (not just Johnny).

Summary (2) – strategy
• Application solutions – S/MIME
• Design to secure things people care about
– Felten & Friedman’s value-based design
– Secure delete
• Better integration of encryption solutions
• Better and faster administrative support
• Technologies that might help
– Shibboleth – probably
– Token-based systems – maybe
– Biometrics – maybe

References
• A. Adams & M. A. Sasse (1999): Users Are Not The Enemy: Why users compromise security mechanisms and how to take remedial measures. Communications of the ACM, 42 (12), pp. 40-46, December 1999.
• Y. Desmedt (in press): Why some network protocols are so user-unfriendly. Security Protocols, Springer LNCS.
• I. Flechais (2005): Designing Secure and Usable Systems. PhD Thesis, Department of Computer Science, UCL.
• N. Ferguson & B. Schneier (2003): Practical Cryptography. Wiley.
• S. Garfinkel & R. C. Miller (2005): Johnny 2: A user test of key continuity management with S/MIME and Outlook Express. Procs. SOUPS 2005.
• M. A. Sasse, S. Brostoff & D. Weirich (2001): Transforming the "weakest link": a human-computer interaction approach to usable and effective security. 
BT Technology Journal, Vol 19 (3), July 2001, pp. 122-131.
• A. Whitten (2004): Making security usable. Doctoral thesis CMU-CS-04-135.
• A. Whitten & D. Tygar (1999): Why Johnny can’t encrypt. Procs. USENIX 1999.
• M. E. Zurko & R. T. Simon (1996): User-centered security. In Procs. of New Security Paradigms Workshop, pp. 27-33, 1996.

How Trust Had a Hole Blown In It
The Case of X.509 Name Constraints
David Chadwick, University of Kent. d.w.chadwick@kent.ac.uk

Abstract

A different interpretation of the Name Constraints extension to that intended by ISO/ITU-T in its 1997 edition of X.509 was made by the IETF PKIX group in its certificate profile (RFC 2459). This has led to conflicting implementations and misalignment of the standard and its profile. This paper reviews the history of the Name Constraints extension, and how it has evolved to the present day from an original concept first described in Privacy Enhanced Mail. The paper concludes by suggesting possible ways forward to resolve this unfortunate conflict.

1. Introduction

The name constraints extension in X.509 was first introduced in the 1997 edition of X.509 [2]. But its history goes back further than that, back in fact to the early 1990s and Privacy Enhanced Mail (PEM) [1]. The extension has evolved over time since its first introduction, and, due to lack of precision in the original X.509 definition, varying interpretations of its meaning have evolved. This has now led to a divergence between the Internet PKIX group’s profile of X.509 [3] and the latest edition of the X.509 standard [4, 8], which is about to be merged and published as X.509 (2005). This matters, because some certificates accepted as valid by one interpretation will be treated as invalid by the other, and vice versa.

This paper tries to untangle the confusion surrounding the name constraints extension, and understand how we have got into the situation we are in today, where the X.509 standard and the RFC 3280 profile [5] disagree about both the syntax and the semantics of this extension. The paper then poses the question, “Where do we go from here?”. This is still an unanswered question, but some possibilities are suggested in the final section of this paper. This will no doubt provoke some further discussion of the problem, both within the standards settings groups and with implementers, and this might help to draw this misalignment to a successful conclusion.

This paper has been written mostly from the documents (standards and draft standards) published during the last 12 years, but also partly from the memories of those working in this area at the time [9]. It therefore could contain errors in the interpretation of what was actually published. However, it is a best-efforts attempt at trying to understand how the current problem has arisen. It also provides an interesting historical case study of the standardisation process, which shows how original intentions evolve with time but, due to imprecise specifications and a lack of dialogue, different conclusions about these intentions are reached by different groups of people. The contents of this paper are as follows. Section 2 describes a motivating example to show how and when name constraints can be useful. Subsequent sections refer to this to show how it can (or cannot) be supported with the various flavours of name constraints as it has evolved with time. Section 3 provides a history of the early developments of the name constraints extension, up until 2000. Section 4 provides a more recent history of the extension, from 2001 to the current date. Section 5 then concludes and suggests answers to the question “Where do we go from here?”. This might help to guide subsequent discussions on this topic.

2. A Motivating Example (or two)

Suppose organisations X, Y and Z all operate CAs, with DNs of {cn=CA, o=X, c=GB}, {ou=admin, o=Y} and {o=Z, c=US}. Assume each issues certificates to its employees, who all have DNs under their respective organisational arcs {o=X, c=GB}, {o=Y} and {o=Z, c=US}. Some of the CAs may also issue certificates to other people, e.g. contractors, subsidiaries, business partners etc. We assume that these are named under different arcs to those of their employees.

Scenario 1. Suppose that any two of these three organisations wish to cross certify each other, and constrain the certificates they wish to trust to only those issued to their employees. This is easily achieved by placing a name constraints extension in each cross certificate issued to X, Y or Z indicating that only certificates starting with a DN of {o=X, c=GB}, {o=Y} or {o=Z, c=US} respectively will be trusted. Any other certificates issued to contractors, business partners etc. will not be trusted, providing their DNs are not in the employees’ name space.

Scenario 2. Suppose one of the organisations only wishes to trust a subset of the certificates issued to the employees of another of the CAs, for example, to employees within the marketing department. This can be achieved by using a name constraints DN of {OU=marketing, o=X, c=GB}, {OU=marketing, o=Y} or {OU=marketing, o=Z, c=US} respectively.

Scenario 3. Suppose a Bridge (or some other) CA exists that has cross certified each of the three organisational CAs, so that it trusts all the certificates issued by all of these CAs. Suppose however that one of these organisational CAs wants to limit the certificates that are deemed to be trustworthy via the Bridge CA, e.g. X only wants to trust certificates issued by Y to its employees and not any certificates issued by Z. In this case, X issues a certificate to the Bridge CA that has a name constraint of {o=Y}, with a parameter to indicate that the first certificate in the chain (that of the Bridge CA) is not to be bound by the name constraints rule.

3. An Early History of Name Constraints (93-2000)

“Name constraints” was originally introduced as a concept to limit the X.509 certificates that could be issued to support Privacy Enhancements for Internet Electronic Mail (PEM). As RFC 1422 states below, the rationale was to try to ensure that each CA only issued certificates containing globally unique distinguished names, since this was a fundamental requirement of the X.500 standard, of which X.509 was an integral part. RFC 1422 [1] states:

To complete the strategy for ensuring uniqueness of DNs, there is a DN subordination requirement levied on CAs. In general, CAs are expected to sign certificates only if the subject DN in the certificate is subordinate to the issuer (CA) DN. This ensures that certificates issued by a CA are syntactically constrained to refer to subordinate entities in the X.500 directory information tree (DIT), and this further limits the possibility of duplicate DN registration.

There was much debate during this period about how globally unique distinguished names could be formed. Questions included: who would be the global naming authorities; who would manage the root of the Directory Information Tree (DIT); and what would be the contents of distinguished names, in terms of the allowed attribute types and values? There were no conclusive answers to this debate when the PEM RFCs were published, and so PEM neatly sidestepped this issue for user certificates, by saying that they would be named subordinate to the names of the CAs, assuming that each CA would have a globally unique name. This mindset tended to continue in the PKIX working group in subsequent years, and still continues in some quarters today, where some experts believe that a subject DN can only be regarded as globally unique if it is assumed to be subordinate to, or used in conjunction with, the name of the issuing CA. This name subordination was never an assumption of X.500, which instead required that each user DN would be globally unique in its own right.

At the time the PEM standard was being released, in 1993, the second edition of X.509 was also being released. Unfortunately X.509(93) did not contain any technical mechanism to indicate any sort of constraints on the subject names that a CA could place in the V2 certificates that it issued. A CA could issue a certificate with any valid subject DN. Thus the PEM standard had to ensure this constraint on subject names through procedural means that were placed on the CA (by the above wording in the PEM standard) and by a technical requirement to check name subordination during certificate path validation. Whilst these mechanisms are sufficient to enforce name subordination, they are very inflexible, since they can only cater for Scenario 1 above (and not for 2 and 3), since there is no information in the X.509 certificate to indicate how and when name subordination rules should be applied (or not). Consequently, as soon as X.509 (93) was released, work started on defining the policy rules that could be placed inside certificates, in order to allow much more flexibility in determining which certificates should be trusted. This work culminated in edition three of X.509, published in 1997 [2]. The primary work on edition three of X.509 was the technical definition of the protocol elements inside certificates that would support the policies and procedures of a CA. This was achieved by adding extensions to the X.509 V2 certificate format, to produce the V3 certificate format that we all use today. (Since V3 certificates are infinitely extensible, there has never been a requirement since 1997 to define a V4 certificate format.)

During the four years that it took to produce the 1997 edition of X.509, several working drafts were produced. The name constraints extension was there from the outset, and its syntax and semantics remained constant until 1996. Annex 1 shows the name constraints definition in the output produced by the Orlando meeting in December 1994 [6] and the Ottawa meeting in 1995 [7]. The difference, shown by the underlined text, was some more explanation of the meanings of the various fields added in 1995. One can see that a primary requirement was to satisfy PEM’s concerns to constrain which names a CA could issue to its subjects, but also to add greater flexibility in order to cater for all three scenarios described above (and more!). There are three notable features of this definition.

- Firstly, the only name form that was supported was the X.500 distinguished name (DN), and the way that a name space was constrained was via the subtreeSpecification directly imported from the X.501(93) standard. The subtreeSpecification allows any arbitrary DIT subtree to be defined, including chopped subtrees which define branches of the top level subtree that are to be chopped off. (Note that X.501 allows filtered (disjoint) subtrees as well, but X.509 stated that filtered subtrees should not be permitted in name constraints.) The subtreeSpecification allows us to easily cater for Scenarios 1 and 2 above.

- Secondly, there were no loopholes. Any user certificate that did not fall within the scope of a specified name constraint was not to be regarded as valid. The semantics of the extension could therefore be stated as “every name that is not explicitly trusted is untrusted”, i.e. the name constraint specifies a white list of trusted subtrees. Since all constrained names were based on distinguished names, there was no possibility that a constrained certificate could contain other than a name in X.500 DN format. This feature ensured that certificates issued to sub-contractors, business partners etc. who had different DNs would not be trusted inadvertently.

- Thirdly, not all certificates issued by subordinate CAs need be constrained. Two control mechanisms were provided for the certifying CA to specify which certificates did not fall within the scope of the name constraints extension. The certifying CA could either specify a set of certificate policies to which this constraint applied, or could specify how many CAs in the chain should be skipped before the constraint applied. This skipping mechanism allows us to cater for Scenario 3 above.

The net result of this extension was that the issuing (superior) CA could tightly control which (subject names in) certificates issued by cross certified (subordinate) CAs should be trusted. Any relying party (RP) using the superior CA as its root of trust could be sure that certificate path validation software would not trust any certificate falling outside these name constraints. We thus had a watertight trust model.

Another extension was also being defined during this period, entitled the subject alternative name field. This extension defined “one or more alternative names, using any of a variety of name forms, for the entity that is bound by the CA to the certified public key”. Several possible alternative name forms for the certificate subject were specified, including a DNS name, an RFC822 email address and an X.400 OR address. This extension underwent some growth during this period, starting out with just four alternative name forms and eventually ending up with nine. Its intention was to allow a certificate subject to have a variety of names in different formats, because it was recognized in the mid 1990s that there was not going to be a global X.500 directory service. If the X.509 standard could not cater for subjects with other name forms besides X.500 ones, then this would significantly limit its scope and applicability. Thus X.509 should support alternative name forms. In order to make the extension fully extensible and able to cater for future name forms that do not currently exist, the alternative name can also be an other name form, which is identified by a globally unique object identifier. Thus it is likely that a relying party might encounter a subject alternative name form that it is not able to recognize. In order to cater for this, the definition of this extension included the text “a certificate-using system is permitted to ignore any name with an unrecognized or unsupported name form”. The implicit assumption was, however, that this was an alternative name for the subject, not a replacement name, and the subject would always have an X.500 distinguished name, even if it did not have an entry in an X.500 directory service. We shall see later that this ability to ignore unrecognized name forms probably indirectly led to the erosion of the trust model built into name constraints.

Yet another certificate extension that was being defined through this period was the one that eventually became known as the basic constraints extension. This had something of a Jekyll and Hyde life. Initially known in the PDAM [6] as the CA or end entity indicator, it had virtually the same syntax and semantics as the basic constraints extension used today. It then grew in significance in the DAM [7], when it changed its name to basic constraints and added a simplified name constraints capability to its syntax, specifically, the ability to specify the set of permitted subtrees in which all subsequent certificate subject names should fall.

Dramatic changes to the X.509 draft standard occurred in April 1996 at the Geneva meeting, precipitated by, amongst other things, the Canadian national ballot comment. The Canadian ballot comment proposed three things:
- to introduce the syntactic construct GeneralName, in order to group together into one super-type all the name forms in the subject alternative name field;
- to add further capability to basic constraints in two ways, firstly by allowing denied subtrees as well as permitted subtrees to be specified, and secondly by replacing the X.500 distinguished name type with the GeneralName super-type;
- to remove the name constraints extension, since it was no longer needed, as its main purpose was now usurped by the enhanced basic constraints extension being proposed in this ballot comment.

The outcome of the resolution of the Canadian and other national ballot comments is well documented; it is the 1997 edition of X.509 (see Annex 2). Precisely what technical discussions were had in order to get there have now largely been forgotten with time, but several things are clear. The Canadian introduction of the GeneralName super-type was accepted, and this was used to specify the subject alternative name extension. The changes to basic constraints were rejected, and this extension reverted to its original 1994 definition. However, the intention of the ballot comment was accepted in principle, by modifying the name constraints extension to match the proposed basic constraints extension. In other words, name constraints was modified by replacing the X.500 distinguished name type with the GeneralName super-type, and deleting the policy and skip certs controls that limited when the name constraints should apply. The intention of name constraints was still very clear, as stated in the first sentence of the description: “indicates a name space within which all subject names in subsequent certificates in a certification path must be located”. It can be seen that its purpose was to tightly constrain the names that the subordinate or cross certified CA could put into the subject field of the certificates that it issued, and more than that, to constrain all additional subordinate CAs further along the certification path. Whereas the original name constraints allowed certain groups of certificates to be specifically excluded, via the skipCerts and policySet fields, the new definition did not. The semantics were very definitely “every name that is not explicitly trusted is untrusted, with no exceptions”. In other words, the original trust model still held true, but was even tighter than before, because Scenario 3 can no longer be supported. This tight trust model is further shown by the Certificate Path Processing Procedure in Section 12.4.3 of the 97 standard, which states:

The following checks are applied to a certificate:
…..
e) Check that the subject name is within the name-space given by the value of permitted-subtrees and is not within the name-space given by the value of excluded-subtrees.

If any of the above checks fails, the procedure terminates, returning a failure indication and an appropriate reason code.

Unfortunately, when the GeneralName syntax replaced the X.500 DN syntax in the name constraints extension, it was not as straightforward as simply replacing one syntax with another. The text describing the name constraints extension should have been significantly enhanced, because new possibilities now existed that did not before. Enhancements were needed in a number of ways. Firstly, how was the name constraints extension to handle general names that were not hierarchically structured, such as IP addresses? How could one specify permitted and excluded subtrees for non-hierarchical names? The answer was to exclude these name forms from being applicable to this extension, as is indicated by the text “only those name forms that have a well-defined hierarchical structure may be used in these fields”. Secondly, what was a relying party to do if there was a mismatch between the various subject alternative names in a certificate and the name constraints extension in the issuing CA’s certificate? Several new possibilities now exist: (i) the subject’s alternative names are a subset of the name forms listed in the CA’s name constraints; (ii) the subject’s alternative names are a superset of the name forms listed in the CA’s name constraints; (iii) the subject’s alternative names intersect with the name forms listed in the CA’s name constraints; (iv) the subject’s alternative names do not overlap with the name forms listed in the CA’s name constraints; and (v) the subject’s alternative names are identical to the name forms listed in the CA’s name constraints. Unfortunately the standard is strangely quiet on this aspect. This is clearly a bug. The fact that appropriate wording was not included to reflect the change of syntax can be seen from the first sentence of the definition, which continued to state “indicates a name space within which all subject names in subsequent certificates”. In fact, with the introduction of General Names, it does not indicate a single name space any longer, but possibly many different name spaces. How a relying party should behave when all these new possibilities present themselves can be resolved in one of two ways, either conjunctively or disjunctively. Conjunctive resolution would require all the name forms in the certificate to match the specified name constraints, whereas disjunctive resolution would require just one name form in the certificate to obey any one of the name constraints. When this issue was recently debated on the X.500 mailing list, the X.500 rapporteur stated “I considered (subject) alt names to be truly alternate forms of the subject name in the certificate. That subject name had to be within the scope of any name constraints, if specified. If the subject name was in scope, the alternative name would be considered within scope.

As soon as X.509 (97) was published, the IETF PKIX group started to work on their profile for X.509 public key certificates. The first version of this was published in 1999 as RFC 2459 [3]. In an attempt to guide implementers in their coding, it had to work out what the intended X.509 semantics were when there was a mismatch between the name forms in a subject’s certificate and those in the name constraints extension of the issuing CA. Therefore RFC 2459 added the following two critical sentences to its specification: “Restrictions apply only when the specified name form is present. If no name of the type is in the certificate, the certificate is acceptable.” Precisely why these sentences were added is not known. It might have been a best-efforts interpretation of how the subject alternative names logic, that stated that unknown name forms could be safely ignored, applied to name constraints. On the other hand it might have been a poor attempt at resolving mismatches between name forms in subject names and name constraints.

Unfortunately, and perhaps without realizing it, the RFC 2459 wording was also flawed in two ways. Firstly, it does not explicitly cover all the five cases listed above. Specifically, what rule should apply when the certificate simultaneously has no name of the type specified in name constraints but also has a name of the type specified in name constraints (cases (ii) and (iii) above)? Should it be trusted or not? But more importantly, it has introduced a 
I don't potentially massive security hole in the think we, meaning the x.509 group, ever trust relationship between the superior considered what to do for any other CA issuing the certificate with a name conditions”. constraints extension and the subordinate (or cross certified) CA receiving it. In fact, it has completely reversed the X.509 trust model into one of “every name for several years. So much so that the form that is not explicitly untrusted is third edition of X.509 was published in trusted” i.e. name constraints now 2001 [4] with almost exactly the same become black lists rather than white lists. wording for the name constraints For example, referring to Scenario 1 extension as the 1997 edition. This lack above, where organization X cross of awareness is perhaps not that unusual, certifies organization Y, suppose that since RFC 2459 was only a profile of unknown to organization X, organization X.509, designed to give implementers Y’s CA is somewhat untrustworthy, or it recommendations on which options of simply changes its rules, and decides it X.509 to implement and which not to. It will issue certificates with other name was not meant to be redefining the logic forms as well as or instead of X.500 DNs, of X.509, and certainly not reversing it, for example RFC822 names. A user, although it might serve to further explain Freddie Fraudster (who may or may not the intended logic to implementers. be employed by Y), with the email Consequently the two critical sentences address nice.guy@cheap.goods.com of RFC 2459 were not added to the X.509 wants to obtain a certificate that will be standard. Whilst many companies had trusted by organization X’s CA, so it asks implemented the X.509 semantics, organization Y’s CA to issue him with a including Entrust, some companies had certificate containing only his email implemented the RFC 2459 reversed address. Using the RFC 2459 semantics semantics. 
In essence, the market place of “trust all except”, the certificate will be was in chaos. An attempt at reconciliation trusted by relying parties who have a root was attempted in late 2001 by the X.509 of trust in organization X’s CA. editor. This entailed a change of syntax However, using the X.509 “untrust all and semantics to the X.509 standard, so except” semantics, the certificate will not that it could capture both the “trust all be trusted. This reversal of semantics has except” (black list) and “untrust all now blown an unblockable hole in the except” (white list) semantics. The trust relationship between the two CAs. expectation (at least in some quarters) The reason is that the number of subject was that the proposed update of RFC alternative name forms is infinite, 2459 would adopt the new X.509 syntax through using the other name form and semantics. The change to X.509 was variant. Since it is impossible to list an published in October 2001 as a technical infinite number of name forms, it is corrigendum [8]. This is shown in Annex impossible to list all the name forms that 3. The update to RFC 2459 was published are trusted (according to RFC 2459) or in April 2002 as RFC 3280 [5]. Perhaps untrusted (according to X.509 (97)). Thus surprisingly, RFC 3280 contained exactly it is much safer for name constraints to the same text as RFC 2459 and made no contain white lists rather than black lists. attempt at profiling the revised version of X.509 which had attempted to resolve the 3. A Recent History of Name conflict. Constraints(2001-05) Despite its publication in January 1999, The important things to note about the the RFC 2459 trust hole and reversal of revised X.509 (2001) version are, the X.509 trust semantics, went largely - a new object identifier was allocated unnoticed by the X.509 standards body to the revised extension, so that the original name constraints extension 4. 
Conclusions and Way was no longer part of the X.509 Forward standard, This is clearly a sorry tale of continually - in an attempt to align with the changing syntaxes and semantics, reversed RFC semantics, the original misunderstandings between two standards syntax had the new “trust all except” creating bodies, the IETF and ISO/ITU-T, semantics applied to it, whilst the new a lack of communication and perhaps syntax had the original “untrust all even lethargy at dealing with issues in a except” semantics applied to it, timely manner. The obvious question to - the new syntax added a “required ask now is “where do we go from here”. name forms” field, with the semantics Clearly there are several possibilities. that each subsequent certificate in the This paper lists some of them, primarily chain “must include a subject name of from a technical perspective without at least one of the required name considering the commercial or political forms”. Thus disjunctive logic was implications of any one of them. The used to resolve the many possibilities other considerations that will also need to for mismatch between name forms in be taken into account when coming to a the certificate and name forms in the resolution of the problem, are trust and name constraints. usability, and how relying parties should - it still does not cater for Scenario 3 behave or adapt when they are presented since there is no way of skipping one with either of the trust paradigms “trust or more certificates in a certificate all except” and “untrust all except”. chain before the names constraints Different user communities may prefer takes effect. different trust paradigms. In summary, the various editions of Some of the different technical X.509 and their RFC profiles have possibilities envisaged by the author are: remained out of synchronisation over name constraints for all of their lifetimes, 1. 
The ITU-T/ISO X.509 group could with the latest version of X.509 (the 2001 accede to the RFC 3280bis design team’s corrigendum) and RFC 3280 being out of request, and revert the X.509 name synchronisation for the last 4 years. The constraints syntax to that of 1997 and situation has recently been brought to the 2001, whilst keeping the new “trust all attention of the X.509 standards except” (black list) semantics. A new community again, through the issuing of extension would then need to be defined defect report 314 by the RFC 3280bis that encapsulated the original “untrust all design team. This recommends that except” (white list) semantics, along with X.509 reverts to the original 1997 and the original exclusion control 2001 syntax but keeps the new “trust all mechanisms from the 94/95 drafts i.e. of except” (black list) semantics instead of specifying policy sets and certificate path its original “untrust all except” (white skipping that control which sets of list) semantics, and, in addition, X.509 certificates the constraint applies to. In should define a new certificate extension this case the IETF would need to do that will capture the original “untrust all nothing to its profile. Implementers who except” (white list) semantics. conform to the IETF semantics would not need to do anything unless and until the new “white list” extension is defined and “trust all except” which is too loose in its they decide to add it to their control capability, and open to abuse. implementations. The new extension is important to cater for Scenario 3. 4. Either of the standards bodies could . create a completely new certificate 2. 
The ITU-T/ISO X.509 group could extension with a more sophisticated revert to the 1997 and 2001 syntax and ASN.1 data type that could precisely original “untrust all except” (white list) specify which names are to be trusted and semantics and add additional clarifying which are not, and when in the certificate text to make clear that a disjunctive logic chain the constraint should comes into is used to resolve name form conflicts effect. For example, the extension could between the subject names and name contain a sequence of permitted, constraints. This would be in the spirit of excluded, and required name forms and the original extension, although it would their name spaces, along with a “Skip N fail to cater for Scenario 3 or those Certificates” parameter. This is the clean implementations that support the IETF sheet approach of taking the requirements semantics. In this case the IETF would and starting from scratch. I am not sure need to take this change of semantics into how successful this would be, given the account when revising RFC 3280, which current large installed user base. would mean deleting the two critical sentences that they added in RFC 2459. 5. Finally, the resolution could simply be ISO/ITU-T should then consider to do nothing to the latest X.509 syntax enhancing the extension, or creating a and semantics, since this allows both new one, so that it can cater for Scenario “trust all except” and “untrust all except” 3, which is an important use case to semantics to be specified. The IETF consider. PKIX group can then decide to either profile the original X.509 syntax, as they 3. A more dramatic solution might be to currently do, and keep their existing add an optional parameter (e.g. integer) to syntax and semantics, or migrate to the 1997 syntax with the semantics “don’t profiling the latest version of X.509. check (n) CA certificates”, in order to Since the IETF has been out of cater for Scenario 3. 
This would be synchronisation with the X.509 name similar to the skipCerts integer that was constraints extension ever since their first present in the 94/95 draft standard. Part RFC was published in 1999, being out of of the rationale given to the author for the synchronisation for another few years current RFC semantics, is so that end should not pose any significant problems entities and CAs could have different to them or to implementers. However, name forms, and then only the end entity their current approach to solving Scenario name forms would be constrained by the 3 type use cases is less than optimal. name constraints. In other words, to achieve Scenario 3 by using different In summary, what lessons have we learnt name forms for CAs and end users. The from this development? Clearly writing addition of a specific parameter which IT standards is hard, and perhaps writing indicates that this is what is required, is security standards is even harder. Even semantically better than the current though the editors try hard to remove method of reversing the trust semantics to ambiguities and incomplete specifications from standards, nevertheless they still Meeting on the Directory “Draft exist. Standards have bugs in them just Amendments DAM 4 to ISO/IEC 9594-2, like software, and just like software, you DAM 2 to ISO/IEC 9594-6, DAM 1 to don’t know what bugs are there until ISO/IEC 9594-7, and DAM 1 to ISO/IEC someone finds them. Cross fertilisation of 9594-8 on Certificate Extensions”, experts between base standards writers Ottawa, Canada, July 1995 and profile writers will clearly help [8] ITU-T. “Information technology – identify poor specifications, but this is not Open Systems Interconnection – The always practical given the constituencies Directory: Public-key and attribute of the two communities. Finally, given certificate frameworks Technical that we are human, errors will always Corrigendum 1”. Oct 2001. occur. 
The real test of human ingenuity [9] Private communications with Hoyt and adaptability is not that we never Kesterson (X.500 rapporteur), Warwick generate errors, but rather that we can Ford (national delegate), and Steve Kent resolve them effectively when they do (PKIX chair). occur. Sadly in this case we appear to have failed the test so far. Acknowledgements The author would like to thank Hoyt References Kesterson for providing the historical [1] S.Kent. “Privacy Enhancement for ISO/ITU-T documents on which this Internet Electronic Mail: Part II: paper is based. Certificate-Based Key Management. RFC 1422. February 1993. [2] ISO 9594-8/ITU-T Rec. X.509 (1997) The Directory: Authentication framework [3] R. Housley, W. Ford, W. Polk, D. Solo. “Internet X.509 Public Key Infrastructure Certificate and CRL Profile”. RFC 2459. January 1999. [4] ISO 9594-8/ITU-T Rec. X.509 (2001) The Directory: Public-key and attribute certificate frameworks [5] R. Housley, W. Polk, W. Ford, D. Solo “Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile”. RFC 3280, April 2002 [6] ISO/IEC JTC 1/SC 21 N9214 “Proposed Draft Amendments PDAM 4 to ISO/IEC 9594-2, PDAM 2 to ISO/IEC 9594-6, PDAM 1 to ISO/IEC 9594-7, and PDAM 1 to ISO/IEC 9594-8”, Orlando, USA, Dec 1994 [7] ISO/IEC JTC 1/SC 21/WG 4 and ITU-T Q15/7 Collaborative Editing Annex 1. The original PDAM Definition of Name Constraints 12.5.2.2 Name constraints field This field specifies a set of constraints with respect to the names for which subsequent CAs in a certification path may issue certificates. 
The following ASN.1 type defines this field:

nameConstraints EXTENSION ::= {
    SYNTAX        NameConstraintsSyntax
    IDENTIFIED BY { id-ce 11 } }

NameConstraintsSyntax ::= SEQUENCE OF SEQUENCE {
    policySet            [0] CertPolicySet OPTIONAL,
        -- If policySet is omitted, the constraints
        -- apply to all policies for which the
        -- certificate is applicable
    nameSpaceConstraint  [1] NameSpaceConstraint OPTIONAL,
    nameSubordConstraint [2] NameSubordConstraint OPTIONAL }

NameSpaceConstraint ::= SEQUENCE OF SubtreeSpecification
    (CONSTRAINED BY { -- specificationFilter is not permitted -- })

NameSubordConstraint ::= SEQUENCE {
    subordType ENUMERATED {
        subordinateToCA          (0),
        subordinateToCAsSuperior (1) } DEFAULT subordinateToCAsSuperior,
    skipCerts  INTEGER DEFAULT 0 }

This extension is always critical. The fields are interpreted as follows:

— policySet: This indicates those certificate policies to which the constraints apply. If this component is omitted, the constraints apply regardless of policy.

— nameSpaceConstraint: If this constraint is present, a certificate issued by the subject CA of this certificate should only be considered valid if it is for a subject within one of the specified subtrees. Any subtree specification may contain a chop specification; if there is no chop specification, a subtree is considered to extend to the leaves of the DIT.

— nameSubordConstraint: This constraint is associated with a nominated CA in the certification path, being either the subject CA of this certificate or a CA which is the subject of a subsequent certificate in the certification path. If the value subordinateToCA is specified then, in all certificates in the certification path starting from a certificate issued by the nominated CA, the subject name must be subordinate to the issuer name of the same certificate.
If the value subordinateToCAsSuperior is specified then, in all certificates in the certification path starting from a certificate issued by the nominated CA, the subject name must be subordinate to the name of the immediately superior DIT node of the issuer of the same certificate. The value of skipCerts indicates the number of certificates in the certification path to skip before the name subordination constraint takes effect; if the value is 0, the constraint starts to apply with certificates issued by the subject CA of this certificate.

Notes
1 The name constraint capability provided through the subtreesConstraint field in the basic constraints extension may be adequate for many applications. The name constraints extension is an alternative which offers a more powerful range of constraining options, including the ability to fully reflect Internet Privacy Enhanced Mail [RFC 1422] rules.
2 The subordinateToCA alternative is provided only for compatibility with the Internet Privacy Enhanced Mail [RFC 1422] conventions. The subordinateToCAsSuperior rule is more powerful and its use is recommended in new infrastructures.

Imported from X.501(93):

SubtreeSpecification ::= SEQUENCE {
    base [0] LocalName DEFAULT { },  -- empty sequence specifies whole administrative area
    COMPONENTS OF ChopSpecification,
    specificationFilter [4] Refinement OPTIONAL }

ChopSpecification ::= SEQUENCE {
    specificExclusions [1] SET OF CHOICE {
        chopBefore [0] LocalName,
        chopAfter  [1] LocalName } OPTIONAL,
    minimum [2] BaseDistance DEFAULT 0,
    maximum [3] BaseDistance OPTIONAL }

Annex 2. The X.509 (1997) Standard Definition of Name Constraints

12.4.2.2 Name constraints field

This field, which shall be used only in a CA-certificate, indicates a name space within which all subject names in subsequent certificates in a certification path must be located.
This field is defined as follows:

nameConstraints EXTENSION ::= {
    SYNTAX        NameConstraintsSyntax
    IDENTIFIED BY id-ce-nameConstraints }

NameConstraintsSyntax ::= SEQUENCE {
    permittedSubtrees [0] GeneralSubtrees OPTIONAL,
    excludedSubtrees  [1] GeneralSubtrees OPTIONAL }

GeneralSubtrees ::= SEQUENCE SIZE (1..MAX) OF GeneralSubtree

GeneralSubtree ::= SEQUENCE {
    base    GeneralName,
    minimum [0] BaseDistance DEFAULT 0,
    maximum [1] BaseDistance OPTIONAL }

BaseDistance ::= INTEGER (0..MAX)

If present, the permittedSubtrees and excludedSubtrees components each specify one or more naming subtrees, each defined by the name of the root of the subtree and, optionally, within that subtree, an area that is bounded by upper and/or lower levels. If permittedSubtrees is present, of all the certificates issued by the subject CA and subsequent CAs in the certification path, only those certificates with subject names within these subtrees are acceptable. If excludedSubtrees is present, any certificate issued by the subject CA or subsequent CAs in the certification path that has a subject name within these subtrees is unacceptable. If both permittedSubtrees and excludedSubtrees are present and the name spaces overlap, the exclusion statement takes precedence.

Of the name forms available through the GeneralName type, only those name forms that have a well-defined hierarchical structure may be used in these fields. The directoryName name form satisfies this requirement; when using this name form a naming subtree corresponds to a DIT subtree.

Conformant implementations are not required to recognize all possible name forms. If the extension is flagged critical and a certificate-using implementation does not recognize a name form used in any base component, the certificate shall be handled as if an unrecognized critical extension had been encountered.
If the extension is flagged non-critical and a certificate-using implementation does not recognize a name form used in any base component, then that subtree specification may be ignored. When a certificate subject has multiple names of the same name form (including, in the case of the directoryName name form, the name in the subject field of the certificate if non-null) then all such names shall be tested for consistency with a name constraint of that name form. NOTE — When testing certificate subject names for consistency with a name constraint, names in non-critical subject alternative name extensions should be processed, not ignored. The minimum field specifies the upper bound of the area within the subtree. All names whose final name component is above the level specified are not contained within the area. A value of minimum equal to zero (the default) corresponds to the base, i.e. the top node of the subtree. For example, if minimum is set to one, then the naming subtree excludes the base node but includes subordinate nodes. The maximum field specifies the lower bound of the area within the subtree. All names whose last component is below the level specified are not contained within the area. A value of maximum of zero corresponds to the base, i.e. the top of the subtree. An absent maximum component indicates that no lower limit should be imposed on the area within the subtree. For example, if maximum is set to one, then the naming subtree excludes all nodes except the subtree base and its immediate subordinates. This extension may, at the option of the certificate issuer, be either critical or non-critical. It is recommended that it be flagged critical, otherwise a certificate user may not check that subsequent certificates in a certification path are located in the name space intended by the issuing CA. 
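The permitted/excluded subtree rules described above (including the precedence of exclusion and the minimum/maximum bounds on the area within a subtree) can be made concrete with a small sketch. This is purely illustrative and not part of any standard text: directory names are modeled as root-first lists of RDN strings, and the function names (`in_subtree`, `acceptable`) are this sketch's own. Real path processing applies these tests per name form and per certificate in the path.

```python
# Hedged sketch of X.509(1997)-style subtree matching. A DN is a
# root-first list of RDNs, e.g. ["c=GB", "o=University of Kent", "cn=A User"].

def in_subtree(name, base, minimum=0, maximum=None):
    """True if `name` lies in the naming subtree rooted at `base`, within
    the area bounded by `minimum`/`maximum` levels below the base node."""
    if name[:len(base)] != base:
        return False              # not subordinate to the subtree root
    depth = len(name) - len(base)  # levels below the base node
    if depth < minimum:
        return False              # above the area (minimum=1 excludes the base)
    if maximum is not None and depth > maximum:
        return False              # below the area (maximum=1 keeps base + children)
    return True

def acceptable(name, permitted, excluded):
    """Check one subject name against lists of (base, minimum, maximum)
    subtree specifications; exclusion takes precedence over permission."""
    if any(in_subtree(name, *sub) for sub in excluded):
        return False
    if permitted and not any(in_subtree(name, *sub) for sub in permitted):
        return False
    return True
```

For instance, with `permitted = [(["c=GB", "o=X"], 0, None)]` and `excluded = [(["c=GB", "o=X", "ou=Evil"], 0, None)]`, a subject under `ou=Evil` is rejected even though it also falls inside the permitted subtree, mirroring the precedence rule above.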
If this extension is present and is flagged critical, then a certificate-using system shall check that the certification path being processed is consistent with the value in this extension.

From Section 12.3.2.1:

GeneralNames ::= SEQUENCE SIZE (1..MAX) OF GeneralName

GeneralName ::= CHOICE {
    otherName                 [0] INSTANCE OF OTHER-NAME,
    rfc822Name                [1] IA5String,
    dNSName                   [2] IA5String,
    x400Address               [3] ORAddress,
    directoryName             [4] Name,
    ediPartyName              [5] EDIPartyName,
    uniformResourceIdentifier [6] IA5String,
    iPAddress                 [7] OCTET STRING,
    registeredID              [8] OBJECT IDENTIFIER }

OTHER-NAME ::= TYPE-IDENTIFIER

Annex 3. The 2001 Corrigendum Definition of Name Constraints

8.4.2.2 Name constraints extension

This field, which shall be used only in a CA-certificate, indicates a name space within which all subject names in subsequent certificates in a certification path must be located. This field is defined as follows:

nameConstraints EXTENSION ::= {
    SYNTAX        NameConstraintsSyntax
    IDENTIFIED BY id-ce-nameConstraint }

NameConstraintsSyntax ::= SEQUENCE {
    permittedSubtrees [0] GeneralSubtrees OPTIONAL,
    excludedSubtrees  [1] GeneralSubtrees OPTIONAL,
    requiredNameForms [2] NameForms OPTIONAL }

GeneralSubtrees ::= SEQUENCE SIZE (1..MAX) OF GeneralSubtree

GeneralSubtree ::= SEQUENCE {
    base    GeneralName,
    minimum [0] BaseDistance DEFAULT 0,
    maximum [1] BaseDistance OPTIONAL }

BaseDistance ::= INTEGER (0..MAX)

NameForms ::= SEQUENCE {
    basicNameForms [0] BasicNameForms OPTIONAL,
    otherNameForms [1] SEQUENCE SIZE (1..MAX) OF OBJECT IDENTIFIER OPTIONAL }
    (ALL EXCEPT ({ -- none; i.e.: at least one component shall be present -- }))

BasicNameForms ::= BIT STRING {
    rfc822Name                (0),
    dNSName                   (1),
    x400Address               (2),
    directoryName             (3),
    ediPartyName              (4),
    uniformResourceIdentifier (5),
    iPAddress                 (6),
    registeredID              (7) } (SIZE (1..MAX))

If present, the permittedSubtrees and excludedSubtrees components each specify one or more naming subtrees, each defined by the name of the root of the subtree and, optionally, within that
subtree, an area that is bounded by upper and/or lower levels. If permittedSubtrees is present, subject names within these subtrees are acceptable. If excludedSubtrees is present, any certificate issued by the subject CA or subsequent CAs in the certification path that has a subject name within these subtrees is unacceptable. If both permittedSubtrees and excludedSubtrees are present and the name spaces overlap, the exclusion statement takes precedence for names within that overlap. If neither permitted nor excluded subtrees are specified for a name form, then any name within that name form is acceptable. If requiredNameForms is present, all subsequent certificates in the certification path must include a name of at least one of the required name forms. If permittedSubtrees is present, the following applies to all subsequent certificates in the path. If any certificate contains a subject name (in the subject field or subjectAltNames extension) of a name form for which permitted subtrees are specified, the name must fall within at least one of the specified subtrees. If any certificate contains only subject names of name forms other than those for which permitted subtrees are specified, the subject names are not required to fall within any of the specified subtrees. For example, assume that two permitted subtrees are specified, one for the DN name form and one for the rfc822 name form, no excluded subtrees are specified, but requiredNameForms is specified with the directoryName bit and rfc822Name bit present. A certificate that contained only names other than a directory name or rfc822 name would be unacceptable. If requiredNameForms were not specified, however, such a certificate would be acceptable. For example, assume that two permitted subtrees are specified, one for the DN name form and one for the rfc822 name form, no excluded subtrees are specified, and requiredNameForms is not present. 
A certificate that only contained a DN and where the DN is within the specified permitted subtree would be acceptable. A certificate that contained both a DN and an rfc822 name and where only one of them is within its specified permitted subtree would be unacceptable. A certificate that contained only names other than a DN or rfc822 name would also be acceptable. If excludedSubtrees is present, any certificate issued by the subject CA or subsequent CAs in the certification path that has a subject name (in the subject field or subjectAltNames extension) within these subtrees is unacceptable. For example, assume that two excluded subtrees are specified, one for the DN name form and one for the rfc822 name form. A certificate that only contained a DN and where the DN is within the specified excluded subtree would be unacceptable. A certificate that contained both a DN and an rfc822 name and where at least one of them is within its specified excluded subtree would be unacceptable. When a certificate subject has multiple names of the same name form (including, in the case of the directoryName name form, the name in the subject field of the certificate if non- null), then all such names shall be tested for consistency with a name constraint of that name form. If requiredNameForms is present, all subsequent certificates in the certification path must include a subject name of at least one of the required name forms. Of the name forms available through the GeneralName type, only those name forms that have a well-defined hierarchical structure may be used in the permittedSubtrees and excludedSubtrees fields. The directoryName name form satisfies this requirement; when using this name form a naming subtree corresponds to a DIT subtree. The minimum field specifies the upper bound of the area within the subtree. All names whose final name component is above the level specified are not contained within the area. 
A value of minimum equal to zero (the default) corresponds to the base, i.e. the top node of the subtree. For example, if minimum is set to one, then the naming subtree excludes the base node but includes subordinate nodes. The maximum field specifies the lower bound of the area within the subtree. All names whose last component is below the level specified are not contained within the area. A value of maximum of zero corresponds to the base, i.e. the top of the subtree. An absent maximum component indicates that no lower limit should be imposed on the area within the subtree. For example, if maximum is set to one, then the naming subtree excludes all nodes except the subtree base and its immediate subordinates. This extension may, at the option of the certificate issuer, be either critical or non-critical. It is recommended that it be flagged critical, otherwise a certificate user may not check that subsequent certificates in a certification path are located in the name space intended by the issuing CA. Conformant implementations are not required to recognize all possible name forms. If the extension is present and is flagged critical, a certificate-using implementation must recognize and process all name forms for which there is both a subtree specification (permitted or excluded) in the extension and a corresponding value in the subject field or subjectAltNames extension of any subsequent certificate in the certification path. If an unrecognized name form appears in both a subtree specification and a subsequent certificate, that certificate shall be handled as if an unrecognized critical extension was encountered. If any subject name in the certificate falls within an excluded subtree, the certificate is unacceptable. If a subtree is specified for a name form that is not contained in any subsequent certificate, that subtree can be ignored. 
If the requiredNameForms component specifies only unrecognized name forms, that certificate shall be handled as if an unrecognized critical extension was encountered. Otherwise, at least one of the recognized name forms must appear in all subsequent certificates in the path. If the extension is present and is flagged non-critical and a certificate-using implementation does not recognize a name form used in any base component, then that subtree specification may be ignored. If the extension is flagged non-critical and any of the name forms specified in the requiredNameForms component are not recognized by the certificate-using implementation, then the certificate shall be treated as if the requiredNameForms component was absent.
NIST Cryptographic Standards Status Report April 4, 2006 Bill Burr Manager, Security Technology Group NIST william.burr@nist.gov
Crypto Standards Toolkit Standardized, best of breed solutions for — Encryption • algorithms • modes of operation — Message authentication — Digital signature — Hashing — Key generation • deterministic (pseudorandom) and nondeterministic (hardware) • key derivation — Key management • agreement • transport • wrapping — Random number generation
Acronyms (some are new) DLC: Discrete Logarithm Cryptography — FFC: Finite Field Cryptography • Digital Signature Algorithm (DSA), Diffie-Hellman (DH) and MQV* — ECC: Elliptic Curve Cryptography • ECDSA, ECDH, and ECMQV* — Believed secure if it’s hard to find discrete logarithms in FF or EC spaces respectively IFC: Integer Factorization Cryptography — RSA is the only algorithm in this category we use • Reversible: can use for encryption or digital signatures — Believed secure if it’s hard to factor big numbers * MQV: Menezes, Qu and Vanstone - efficient secure authenticated key agreement protocol that uses DLC
Cryptographic Standards Security Requirements for Cryptographic Modules: FIPS 140-2
— Symmetric Key: DES (FIPS 46-3); TDES (SP 800-67); AES (FIPS 197); Block Cipher Modes (SP 800-38A, B, C); HMAC (FIPS 198)
— Public Key: Dig. Sig. Std. FIPS 186-2 & FIPS 186-3 (DSA with bigger keys; RSA per X9.31 and PKCS #1; ECDSA per X9.62); Key Establishment Schemes SP 800-56A (DH & MQV; FFC & ECC schemes; X9.42 and X9.63) and SP 800-56B (IFC schemes; X9.44); Key Management Guideline (general guidance; key management organization; application-specific guidance)
— Secure Hash: SHA-1, SHA-224, SHA-256, SHA-384, SHA-512 (FIPS 180-2)
— Random Number Generation: SP 800-90 (X9.82)
FFC: Finite Field Crypt., i.e., DSA, DH, MQV; IFC: Integer Factorization Crypt., i.e., RSA; ECC: Elliptic Curve Cryptography, i.e., ECDSA, ECDH, ECMQV
Theoretical Comparable Strengths (size in bits)
Sym. Key: 80 | 112 | 128 | 192 | 256
Hash functions (for signatures): 160 | 224 | 256 | 384 | 512
FFC and IFC: 1k | 2k | 3k | 7.5k | 15k
ECC: 160 | 224 | 256 | 384 | 512
• Note: approx. strength of hash functions used in HMAC, random number generation or key derivation is the hash size itself. Sym. Key: symmetric key encryption algorithms; FFC and IFC: finite field discrete log and factoring based public key algorithms; ECC: elliptic curve discrete log based public key algorithms. White background: expected to be secure until at least 2030. Yellow background: phase out use by 2010.
NIST Crypto Standards Status (columns by strength: 56 | 80 | 112 | 128 | 192 | 256)
Sym. Key: FIPS 46-3 | FIPS 185 | SP 800-67 | FIPS 197 (AES)
Modes: SP 800-38A, B, C, D, E
Hashing: FIPS 180-2
MAC: FIPS 198 (HMAC) & SP 800-38B (CMAC)
FFC & IFC Sigs.: FIPS 186-2/3 | FIPS 186-3
ECC Sig.: FIPS 186-2/3 | FIPS 186-3
Key Mgmt.: SP 800-57
Key Schemes: SP 800-56A (DH & MQV) & SP 800-56B (RSA)
RNGs: SP 800-90 (X9.82)
Black text: FIPS approved or NIST Recommended; hashed background: no plans for this strength; blue italic text: public review begun; black background: withdrawn; red italic text: under development
Recent Events: Random Number Generation SP 800-90: Deterministic Random Bit Generators — Draft for public review Dec.
2005 • Hash, HMAC, block cipher and number theoretic (Elliptic Curve) based generators ANSI X9.82: Consists of four parts — Part 1: Overview and Basic Principles — Part 2: Entropy Sources — Part 3: Deterministic (pseudo-random) Random Bit Generators — Part 4: Random Bit Generator Constructions Workshop held Summer 2004
Recent Events FIPS 186-3 Digital Signature Standard began Public Review — Extend DSA to include 2048-bit & 3072-bit keys — ECDSA & RSA also updated — RNG: Points to SP 800-90 — Assurance: Points to SP 800-89 (also posted for comment) — Public Review ends June 12th — http://csrc.nist.gov/publications/drafts.html NIST SP 800-56A: Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography — Posted: March 2006 — http://csrc.nist.gov/publications/nistpubs/ — Covers FFC and ECC Diffie-Hellman and MQV schemes
The Future – Near Term Post SP 800-38D, GCM for comments Start issuing Pub. Key certificates with at least 2k FFC or 224 bit ECC keys and SHA-256 or SHA-224 by 2008 Stop using 80-bit equivalent crypto by 2010 — Don’t rely on 2key TDEA, SHA-1 (for signatures), 160-bit ECDSA, 1024-bit RSA, 1024-bit DSA, 1024-bit DH & MQV key agreement after Dec 31, 2010 Hash Standard workshops and competition — Response to cryptanalysis of SHA-1 — Define requirements in workshops – next one Aug.
24-25 in Santa Barbara — Competition for new Hash Function standard to supplement or supplant SHA-2 hash functions
Future for Public Key Crypto NIST expects to allow continued use of finite field public key cryptography for the foreseeable future — Need 2048-bit keys after 2010 NIST encourages movement to Elliptic Curve methods for 128-bit equivalent public key crypto — May never see wide use of 3k FFC & IFC PK algorithms • ECC patents should be a minor issue long before we need 128-bit equivalent public key crypto in most unclassified applications • With bigger keys, ECC is much more efficient NIST encourages adoption of MQV key agreement protocol — Many good properties — Specified in SP 800-56A
Full MQV Key Agreement Scheme One way to view Full MQV is as ephemeral-ephemeral Diffie-Hellman with static keys (contained in PKI certificates) included — Get nice properties of e-e DH (forward secrecy) with authentication for about 25% more computation
A → B: IDA, ePubA, Cert(sPubA)
B → A: IDB, ePubB, MAC_K(ms1, IDB, IDA, ePubB, ePubA), Cert(sPubB)
A → B: MAC_K(ms2, IDA, IDB, ePubA, ePubB)
The shared secret is computed from the identifiers of A and B (IDA & IDB), the ephemeral key pairs (ePubA, ePrivA, ePubB & ePrivB), and the static key pairs (sPubA, sPrivA, sPubB & sPrivB). ms1 & ms2 are distinct message strings. K is an authentication key derived from the shared secret. Static public keys or certificates may be obtained out of band.
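The three-message flow above can be mimicked in a toy sketch. This is not the MQV computation itself (real MQV folds the static and ephemeral key pairs into one algebraic step); it substitutes a plain ephemeral Diffie-Hellman secret over an assumed demo group, and uses HMAC for the key-confirmation tags, purely to show the message and MAC structure:

```python
import hashlib, hmac, secrets

# Toy group parameters (assumption: a small Mersenne-prime demo group; NOT for real use)
p = 2**127 - 1
g = 5

def keypair():
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

# Ephemeral key pairs for A and B (static keys/certificates omitted in this sketch)
ePrivA, ePubA = keypair()
ePrivB, ePubB = keypair()
IDA, IDB = b"A", b"B"

def derive_K(shared):   # authentication key K derived from the shared secret
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

def enc(n):             # encode a group element for MACing
    return n.to_bytes(16, "big")

def mac(K, *parts):
    return hmac.new(K, b"|".join(parts), hashlib.sha256).digest()

# A -> B: IDA, ePubA (plus Cert(sPubA) in the real scheme)
# B computes its view of the shared secret and replies with key confirmation
K_B = derive_K(pow(ePubA, ePrivB, p))
tag1 = mac(K_B, b"ms1", IDB, IDA, enc(ePubB), enc(ePubA))

# A computes the same shared secret, verifies B's tag, answers with ms2
K_A = derive_K(pow(ePubB, ePrivA, p))
assert hmac.compare_digest(tag1, mac(K_A, b"ms1", IDB, IDA, enc(ePubB), enc(ePubA)))
tag2 = mac(K_A, b"ms2", IDA, IDB, enc(ePubA), enc(ePubB))
assert hmac.compare_digest(tag2, mac(K_B, b"ms2", IDA, IDB, enc(ePubA), enc(ePubB)))
```

Because only ephemeral keys contribute to the secret here, the sketch shows the forward-secrecy side of the picture; the authentication that Full MQV gets from the static keys is what the omitted certificates would add.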
SP 800-56: MQV Key Agreement Scheme Most good security properties with the fewest messages and public key operations of any key agreement scheme — Various combinations of static and ephemeral keys — 1, 2 & 3 pass protocols — MQV primitives for FFC and ECC — Nice properties • Implicit key authentication • Explicit key authentication • Forward secrecy • Key compromise impersonation resilience • Unknown key-share resilience — Certicom patents on MQV • IETF proposals from Certicom for TLS and IPSEC — No security proof in the Canetti-Krawczyk model
Hash Functions – a Hot Topic Hash functions take a variable-length message and reduce it to a shorter fixed message digest Many applications: “Swiss army knives” of cryptography: — Digital signatures (with public key algorithms) — Random number generation — Key update and derivation — One way function — Message authentication codes (with a secret key) — Integrity protection — code recognition (lists of the hashes of known good programs or malware) — User authentication (with a secret key) — Commitment schemes Recent Cryptanalysis changing our understanding of hash functions — Prof. Wang’s analysis of MD5, SHA-0 and SHA-1 & others
Merkle-Damgard Hash Functions Take a long message, break it into blocks (typ. 512 bits) — M1, M2, M3…Mk (pad out last block) Let F be a “compression function” that operates on a block and the current h-bit state and “mixes” the block into the state Last output of compression function is the h-bit message digest. [Figure: a fixed IV provides the initial h-bit chaining value; F is applied in turn to each block M1…Mk together with the current chaining value, and the final h-bit output of F is the message digest.]
Hash Function Properties Preimage resistant — Given only a message digest, can’t find any message (or preimage) that generates that digest. Roughly speaking, the hash function must be one-way. Second preimage resistant — Given one message, can’t find another message that has the same message digest. An attack that finds a second message with the same message digest is a second pre-image attack.
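The Merkle-Damgard iteration described above can be sketched directly. Here SHA-256 applied to (state || block) stands in for the compression function F, and the padding is a simplified form of MD strengthening; this illustrates the chaining structure only, not a real hash design:

```python
import hashlib

BLOCK = 64        # bytes per message block (512 bits, as on the slide)
IV = bytes(32)    # fixed h-bit (here 256-bit) initial chaining value

def F(state, block):
    """Toy 'compression function': mixes a block into the h-bit state."""
    return hashlib.sha256(state + block).digest()

def md_hash(msg):
    # Pad out the last block, appending the message length (simplified
    # Merkle-Damgard strengthening)
    padded = msg + b"\x80"
    padded += bytes(-(len(padded) + 8) % BLOCK) + len(msg).to_bytes(8, "big")
    state = IV
    for i in range(0, len(padded), BLOCK):
        state = F(state, padded[i:i + BLOCK])   # chain block Mi into the state
    return state   # last output of F is the message digest

d1 = md_hash(b"hello world")
assert d1 == md_hash(b"hello world")   # deterministic
assert d1 != md_hash(b"hello worle")   # any change alters the digest
assert len(d1) == 32                   # h-bit digest (h = 256 here)
```

The length padding matters: without it, a message and the same message extended by zero blocks could collide trivially.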
• It would be easy to forge new digital signatures from old signatures if the hash function used weren’t second preimage resistant
Collision resistant — Can’t find any two different messages with the same message digest • Collision resistance implies second preimage resistance • Collisions, if we could find them, would give signatories a way to repudiate their signatures
Halloween Hash Bash Held Oct. 31-Nov 1 2005 at NIST Recommendations: — Getting rid of MD5 is highest priority • NIST never recommended MD5, but it is widely used — OK to continue using SHA-1 a few more years in old apps (really have to) but new apps must use something else (SHA-2) • But we don’t want apps to roll their own crypto — SHA-2 support doesn’t arrive from Microsoft until Vista → long tail to XP • Can’t issue only SHA-2 certificates until clients can do SHA-2
Hash Bash on SHA-2 A family of algorithms, but only SHA-256 is usually discussed Very little analysis yet - rather complex May be theoretical break within a decade Probably won’t be a practical attack within a decade Not very efficient in hardware Can fix problems with more rounds — Need to be more conservative with number of rounds generally (think block cipher) NIST recommends for relatively near term
Hash Bash: General Observations Merkle-Damgard hash as random oracle => trouble? Algorithm agility is needed — Resilience: several hash functions But: algorithm agility “sucks” in hardware So: we should overbuild But: everybody pays all the time for that
Hash Bash: The Future Still uncertain about exactly what we want Beyond Merkle-Damgard: block “generic attacks” Maybe we need more specialized functions — MACs, Digital Signatures, PRFs, KDF? Better design — Higher Hamming weights — Better compression functions Provable security? — Number theoretic or equivalent to breaking something?
Improve protocols to rely less on hash function properties NIST Policy on SHA-1 and SHA-2 Federal Users may use SHA-2 family hash functions (SHA-224, SHA-256, SHA-384, & SHA-512) for all hash function applications. For digital signatures, commitment schemes, time-stamping and other apps that require collision resistance, Federal users: — Should convert to SHA-2 as soon as practical, but — Must stop using SHA-1 for these apps by end of 2010 Federal users are encouraged to use SHA-2 for all new applications; however, they may continue to use SHA-1 after 2010 for: — HMAC — Key derivation — Random number generation Longer Term: Hash Standard Strategy For reasonably long term, not a crash program — Still discussing requirements/criteria — Hash functions not as mature as block cipher design in late 90s Flesh out requirements & criteria — Additional workshop(s) — First one after Crypto2006, August 24-25, 2006 in Santa Barbara Competition — Probably 2 stages, as with AES Selection — How many? — How do we improve significantly on SHA-2? NSA Suite B Previously, NIST’s open crypto algorithms used to protect sensitive unclassified data could not be used to protect classified data. That is no longer the case: NIST and NSA have been working to offer a standardized, public set of algorithms that can be used to protect both unclassified and classified information. 
The result is Suite B, an NSA selected subset of the NIST toolkit for classified applications up through Top Secret — http://www.nsa.gov/ia/industry/crypto_suite_b.cfm?MenuID=10.2.7 Specific NSA approval is still required for the implementations and systems that are used to protect classified information — Expect more guidance from NSA on acceptable key management • Should be consistent with SP 800-57
Suite B FIPS 140 Cryptographic Module Validation required for unclassified applications NSA will evaluate products used for classified applications — Commercial COMSEC Evaluation Program (CCEP) and User Partnership Agreements (UPA) • Not only evaluate a vendor's product, but also provide extensive design guidance on how to make a product suitable for protecting classified information • Use of Suite B algorithms is only one step in a larger process
CNSSP #15 Committee on National Security Systems Policy No. 15 128-bit AES can be used for up to SECRET 192 & 256 bit AES can be used for up to TOP SECRET — Only AES-256 is used in Suite B http://www.cnss.gov/Assets/pdf/cnssp_15_fs.pdf
Suite B – the algorithms Encryption Algorithm AES (FIPS 197) — AES-128 up to SECRET — AES-256 up to TOP SECRET Digital Signature (FIPS 186-3) — ECDSA with 256-bit prime modulus up to SECRET — ECDSA with 384-bit prime modulus up to TOP SECRET Key Agreement (NIST SP 800-56A) — EC Diffie-Hellman or EC MQV with 256-bit prime mod. up to SECRET — EC Diffie-Hellman or EC MQV with 384-bit prime modulus up to TOP SECRET Hash Functions (FIPS 180-2) — SHA-256 up to SECRET — SHA-384 up to TOP SECRET
Encryption Algorithms (unclassified use through 2010 / after 2010; Suite B level):
— AES-128: through and after 2010; Suite B up to Secret
— AES-192: through and after 2010; not in Suite B
— AES-256: through and after 2010; Suite B up to Top Secret
— 2-key TDES: through 2010 only
— 3-key TDES: through and after 2010
Hash Algorithms (for digital signatures):
— SHA-1: through 2010 only
— SHA-224: through and after 2010
— SHA-256: through and after 2010; Suite B up to Secret
— SHA-384: through and after 2010; Suite B up to Top Secret
— SHA-512: through and after 2010
Digital Signature:
— FFC or IFC (DSA or RSA): 1024 through 2010 only; 2048 and 3072 through and after 2010
— ECC: 160 through 2010 only; 224, 256, 384 and 512 through and after 2010; Suite B: 256 up to Secret*, 384 up to Top Secret* (* prime modulus curves only)
Key Establishment:
— FFC (Diffie-Hellman or MQV) or IFC (RSA): 1024 through 2010 only; 2048 and 3072 through and after 2010
— ECC (Diffie-Hellman or MQV): 160 through 2010 only; 224, 256, 384 and 512 through and after 2010; Suite B: 256 up to Secret*, 384 up to Top Secret* (* prime modulus curves only)
Why AES 256 with ECC 384 in Suite B? Theoretically — AES 256 is equivalent to ECC 512 — AES 192 is equivalent to ECC 384 By CNSSP #15 192 bit AES is enough for Top Secret — AES 192 not included in Suite B AES 256 with ECC 384 seems a mismatch — But there is very little performance penalty for AES 256 • About a 20% difference • A lot of people are choosing to use AES 256 — There is a significant performance cost going to ECC 512 and ECC 384 is strong enough for Top Secret — Make life simple: use ECC 384, which is fast and strong enough, with AES 256 which is strong and fast enough.
Suite B: Bottom Line Some folks need to do both classified and unclassified applications National security apps. need to use ordinary commercial software No fundamental difference between algorithms for SBU & classified NIST & NSA cooperation: cryptography for both SBU and classified NSA approval of implementations required for classified — Expect NSA-managed keying material for classified apps.
Unclassified users must have CMVP validated crypto modules — More choices of algorithms including the ones in Suite B — Users typically generate their own keys Nobody loses; some of us gain
NIST Links NIST Computer Security Resources Center — http://csrc.nist.gov/ NIST Crypto toolkit — http://csrc.nist.gov/CryptoToolkit/ FIPS 201/PIV page — http://csrc.nist.gov/piv-project/index.html FIPS page — http://csrc.nist.gov/publications/fips/index.html NIST Security Special Publications — http://csrc.nist.gov/publications/fips/index.html Questions ?
NIST Information Security Responsibilities NIST has been charged under a series of Laws with the responsibility for issuing guidance and standards for security — Federal Information Security Management Act of 2002 (FISMA) Federal Information Processing Standards (FIPS) & NIST guidance apply only to the protection of unclassified, sensitive information by Federal agencies — But they are widely adopted by others
NIST Cryptographic Standards “Toolkit” The Data Encryption Standard, FIPS 46, approved in 1978 began the modern era of open cryptographic standards US Federal government users must use NIST standards and guidance to protect unclassified, sensitive data — Nobody else is required (by US law) to use them — FIPS 140 Cryptographic Module Validation Program (CMVP) Crypto FIPS & recommendations are often adopted by others — SHA-1, AES, DSS & DES became widely used ANSI & ISO standards No US Federal regulation of cryptography by the private sector — Limited commercial crypto export controls & no crypto import controls — Some laws/regulations may effectively require business crypto use
NIST Crypto Toolkit Philosophy Best of breed standardized algorithms — Intended to be secure against analytic attacks Small but comprehensive set of algorithms and methods — Promote interoperability — It’s hard & expensive to analyze crypto and be sure it’s secure — Industry doesn’t want to have to support too many algorithms Transparent Process –
AES selection is a model — Published standards, nothing is secret — Do our best to explain our choices — Invite the whole world to review and comment; work with international cryptographic community Do not rule out patented methods but must be freely licensed — Patented crypto is very unpopular
Symmetric Key Block Cipher Encrypt & decrypt with the same key Fast workhorse — Used for most message and file encryption Used in a variety of “modes of operation” — different security and other properties [Figure: a plaintext block and a key enter the encryption algorithm to produce a ciphertext block; the decryption algorithm with the same key recovers the plaintext block.]
Modes of Operation Recommendations SP 800-38 A – Modes of operation for encryption - update of FIPS 81 • ECB – Electronic Code Book • CBC – Cipher Block Chaining • CFB – Cipher Feedback • OFB – Output Feedback • Counter (not in FIPS 81) SP 800-38 B: CMAC Mode for Authentication SP 800-38 C: Counter with CBC MAC mode (CCM) — Used by 802.11i for wireless LANs SP 800-38 D: Galois Counter Mode (GCM) – Working Draft SP 800-38 E: AES Key Wrap – waiting for active runway
CBC Mode [Figure: to encrypt, each plaintext block P1…Pn is XORed with the previous ciphertext block (the IV for P1) and encrypted under K to give C1…Cn; to decrypt, each Ci is decrypted under K and XORed with the previous ciphertext block (the IV for C1) to recover P1…Pn.]
Counter Mode (a stream cipher mode) [Figure: counter values CTR1, CTR2, …, CTRn are encrypted under K to form a keystream, which is XORed with P1…Pn to give C1…Cn; decryption encrypts the same counters and XORs the keystream with C1…Cn to recover P1…Pn.]
CCM Mode Overview [Figure: the header and payload are authenticated; the payload and MIC are encrypted.] Designed for IEEE 802.11 wireless LANs Use CBC-MAC to compute a MIC (Message Integrity Code) on the plaintext header, length of the header, and the payload Use CTR mode to encrypt the payload — Counter values 1, 2, 3, … Use CTR mode to encrypt the MIC — anywhere else we’d call it a MAC rather than a MIC — Counter value 0
Finding Hash Collisions Find two messages with the same digest Birthday “paradox” — Given a population of x equally probable values, we need roughly √x random samples to expect to find a single collision Therefore any attack on a hash with an n-bit message digest that finds a collision in much
under 2^(n/2) operations is said to “break” the collision resistance property of the hash function
Collision Resistance: a strong property — A hash function that is collision resistant must necessarily be second preimage resistant
Finding Preimages Work backward from message digest to find a message that will produce it Expect to have to hash about 2^n messages to find an unknown pre-image for any particular selected message digest — Any attack that finds a preimage in significantly under 2^n operations is a break of the one-way property or preimage resistance of a hash function If we can find second preimages, we can forge a new digital signature from an old signature
SHA-1 Collisions Current best estimate ~2^62 to 2^63 operations to find a collision — Attack due to Prof. Xiaoyun Wang — Should be 2^80 — 2^62 is still a fair amount of work • How much farther will it go? — Would be nice to verify this result • May be dangerous to do so
How important are collisions? Two extreme views: — Relatively minor, only matter for rare instances where we have to prove to a 3rd party (e.g. certain PKI apps), or; — Canary in the mineshaft, crack in the dyke – a warning of much bigger dangers possibly close at hand
Trust Infrastructure and DNSSEC Deployment Allison Mankin mankin@psg.com 5th Annual PKI R&D Workshop 2006
Why DNSSEC • Good security is multi-layered and preventive – Multiple defense barriers in physical world – Multiple ‘layers’ in the networking world • DNS infrastructure – Providing DNSSEC extensions to raise the barrier for DNS based attacks – Provides a security barrier or an enhancement for systems and applications
The Problem • DNS data is too readily changed, removed or replaced between the “server” and the “client”.
• This can happen in multiple places in the DNS architecture – Some places are more vulnerable than others – Vulnerabilities in DNS software make attacks easier (and software will never stop being at risk)
Solution: a Metaphor • Compare DNSSEC to a sealed transparent envelope. • The seal is applied by whoever closes the envelope • Anybody can read the message • The seal is applied to the envelope, not to the message • This Metaphor is the Brilliant Work of Olaf Kolkman
Secure DNS Query and Response (simple case) [Figure: the end-user asks the local server for myhost.example.com; the local server queries the root, com, and example.com servers and returns myhost.example.com = 192.0.2.1 plus a signature for myhost.example.com. An attacker cannot forge this answer without the associated private keys.]
How Does DNSSEC Extend DNS? • DNSSEC adds four new record types: – DNSKEY - carries public key – RRSIG - carries signature of DNS information – DS - carries a signed hash of key – NSEC - signs gaps to assure non-existence • Working on one more, NSEC3 – This would provide privacy enhancement
DNS-Vectored Attacks in Current Events: BlackBerry Router • From RIM (January, updated 29 Mar): Under normal circumstances, this [a way that the BlackBerry Router can be shut down using a flaw in the routing protocol] should be viewed as an internal-only vulnerability because the BlackBerry Router will only communicate with the BlackBerry Infrastructure. An external user attempting to exploit this needs to manipulate Domain Name System (DNS) queries. This results in a denial of service and does not require any further action to interrupt connectivity to external services. Enterprises can mitigate the risk of DNS hijacking by creating static entries in their local DNS or HOSTS tables for the BlackBerry Infrastructure.
• Pointers and info on several DNS attacks from 2005 at http://www.dnssec-deployment.org/epi.htm
Status of DNSSEC • Production: major server implementations of the protocols – RFCs 4033, 4034, 4035 • Not ready: some OS (Microsoft); embedded-type systems (e.g. firewalls); applications-awareness • Still in development: an extension to prevent zone-walking, an important concern for a small but key set of sites • Incremental deployment of what we’ve got currently is like setting tripwires - this is good because all past experience suggests the tripwires are needed
State of the art deployment: RIPE • Signed reverse tree zones (in-addr.arpa, ip6.arpa) for protection of this infrastructure • Because .arpa and root not yet signed, developed careful web and secure-mail mechanism for announcing, distributing and rolling-over the public key signing key for their zones • https://www.ripe.net/projects/disi/keys/
State of the art: SE • .SE was first to turn on production DNSSEC and first to receive delegations • A characteristic of their operation is their transparency of security planning – Deliberations on key length, smart card for the private keys, CA software for managing the delegations, all documented on the site • http://dnssec.nic.se/
Other environments • Internet2 and U.S.
universities including Berkeley, Penn, MIT are in DNSSEC efforts – Campuses have many targets – DNS organizations are very active, provide many trusted secondaries
Root • Status here is complex • Regular DNSSEC workshop at ICANN has minimal ties to IANA • The DNS technical community consensus is that incremental, large deployment is the answer and root deployment can come later, as a “pull”
Trust Infrastructure: SSHFP • RFC 4255 allows ssh fingerprints to be published in the DNS – SSHFP Resource Record (RR) – A replaced or modified DNS response destroys ssh host verification, so this mechanism mandates use of DNSSEC authentication – A different take: DNSSEC extensions allow DNS to vector the trust infrastructure • More of this: RFC 4025, IPSECKEY – IPSECKEY RR – DNSSEC allows opportunistic key exchange
Trust Infrastructure: DKIM • Domain Keys Identified Mail stores and retrieves a public key for signing of email in the DNS – The signature goal varies by use but attests a domain and often also an identity “on behalf of whom” – Given this, it is obvious that the protection of the DKIM usage in DNS is needed
DKIM in a Vulnerable DNS Server [Figure: a Mar 2005 style ISP server attack. The endpoint queries brisbane._dkim.example.com through its ISP server, which returns a valid-looking reply carrying poisoned additional information; a false origin address is installed in ISP servers via the .com server (about 10% of servers vulnerable). Hypothetical attack: a new signature is added by X, whose public key resides at a false domain Y.] A commercially successful DNS attack last year used the same vulnerabilities and topology.
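The NSEC "signs gaps" idea from the DNSSEC record list above, and the zone-walking concern it raises, can be sketched as follows. This is a simplified model: plain sorted strings stand in for DNSSEC canonical name ordering, and the signatures themselves are omitted:

```python
# Each NSEC record covers the gap from one existing name to the next in
# canonical order, so a signed NSEC proves nothing exists between them.
zone = sorted(["alpha.example.", "bravo.example.", "delta.example."])

def nsec_for(qname):
    """Return the (owner, next) pair whose gap covers a nonexistent qname."""
    for owner, nxt in zip(zone, zone[1:] + zone[:1]):
        # The last NSEC wraps around from the final name back to the apex.
        if owner < qname < nxt or (nxt <= owner and (qname > owner or qname < nxt)):
            return owner, nxt
    return None

# A signed gap proves charlie.example. does not exist...
assert nsec_for("charlie.example.") == ("bravo.example.", "delta.example.")
# ...but the same records let anyone enumerate the zone name by name,
# which is the privacy concern the NSEC3 work is meant to address.
```

The enumeration risk is exactly the "zone-walking" extension-in-development mentioned earlier: every denial answer leaks two real names, so repeated queries walk the whole zone.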
Observations and conclusions • There are cost tradeoffs to deploying DNSSEC – Good studies of the computing and network costs from NLNET Labs and NIST (low-moderate, probably even taking into account size of SHA-256) – Training and operation, key management • Besides thinking of costs, consider risk-benefit – We need metrics for exploits caught by current deployments – Are there alternatives to DNSSEC for protecting DKIM? – How costly is the exploitation that occurs if we don’t have this protection? NIST PKI’06: Integrating PKI and Kerberos Jeffrey Altman The Slow Convergence of PKI and Kerberos  At Connectathon 1995 Dan Nessett of Sun Microsystems was quoted saying “Kerberos will gradually move toward public-key” in reference to the publication of Internet Draft • draft-ietf-cat-kerberos-pk-init-00  IETF CAT Working Group (Apr 1995) discussed not only pk- init-00 but also Netscape’s proposal for something called SSL.  Eleven years and 34 drafts later PK-INIT has been approved as an IETF Draft Standard. How much more gradually can we move?  A Three Slide Overview of Kerberos V5 Before PKI: Single Realm  The Authentication Service (AS) Exchange • The client obtains an "initial" ticket from the Kerberos authentication server (AS), typically a Ticket Granting Ticket (TGT). • The AS-REQ may optionally contain pre- authentication data to prove the client’s identity. • The AS-REP, containing an authenticator (aka ticket), is encrypted in the client’s long term key.  The Ticket Granting Service (TGS) Exchange • The client subsequently uses the TGT to authenticate and request a service ticket for a particular service, from the Kerberos ticket- granting server (TGS).  The Client/Server Authentication Protocol (AP) Exchange • The client then makes a request with an AP-REQ message, consisting of a service ticket and an authenticator that certifies the client's possession of the ticket session key. The server may optionally reply with an AP-REP message. 
AP exchanges typically negotiate session specific symmetric keys.
Slide 2: Kerberos 5 Cross Realm Tickets Obtained: krbtgt/FOO.KERB@FOO.KERB, krbtgt/BAR.KERB@FOO.KERB, Srv/Host@BAR.KERB Cross Realm works when realm FOO.KERB shares a key with realm BAR.KERB. In all cases, the KDC must share a key with the application Service.
Slide 3: Kerberos 5 Delegation  Delegation utilizes the ability to FORWARD tickets from a client machine to a service.  The service can then assume the identity of the client in order to authenticate to a subsequent service.  Constraints can be applied to the forwarded tickets using authorization data.
PKI and Kerberos have each excelled in separate spheres
PKI and the Web:  Smartcards for logon  Web Service authentication  TLS authenticated services • FTP, SMTP, IMAP, many more …  Signatures and Privacy (S/MIME) • E-mail
Kerberos and Enterprise Services:  Console Logon  Remote Console Logon  File System Access • AFS, NFS, CIFS, FTP  E-mail Service Access  Print Services  Real-time authenticated messaging • Instant Messages • Zephyr
But combining PKI and Kerberos is necessary for true Single Sign-On  Multifactor Initial Authentication  Mutual Client Server authentication  With Delegation  Through Proxies  Supporting all protocols It’s a big task but we can do it!!!
How the PKI and Kerberos worlds can be joined  Imagine a world in which each Kerberos Key Distribution Center is also a Certificate Authority. • It’s not hard to do, think Microsoft Active Directory.  PK-INIT* • Kerberos Initial Ticket Acquisition using Public Key • Certificates or Raw Key Pairs  PK-CROSS • Establishment of Kerberos Cross Realm relationships using Public Key • Mutual Authentication of KDCs • Secure Generation of Static Keys  PK-APP (aka KX509)* • Acquisition of Public Key certificates using Kerberos *implementations are currently available
PK-INIT: How does it work?
 PK-INIT is implemented as a Kerberos Pre-authentication mechanism  If the client’s request adheres to KDC policy and can be validated by its trusted CAs, then the reply is encrypted either with • A key generated by a DH key exchange and signed using the KDC’s signature key, or • A symmetric encryption key, signed using the KDC’s signature key, and then encrypted with the client’s public key.  Any required keying material is returned to the client as part of the AS-REP’s PA-PK data.  If the client can validate the KDC’s signature, obtain the encryption key, and decrypt the reply, then it has successfully obtained an Initial Ticket Granting Ticket.
PK-INIT: Not Vaporware  Draft -9 deployed by Microsoft in Windows 2000 and above  The Proposed Standard (Draft -34) is being deployed today: • Microsoft Vista • Heimdal Kerberos  Future deployments: • MIT Kerberos and the operating systems that distribute it
PK-INIT: Opening the doors to alternative enrollment models  Trusted CA issued certificates can be enrolled with multiple realms  A single smart card can be enrolled with multiple realms, allowing the acquisition of TGTs for multiple service providers  Raw public key pairs can be used instead of certs, allowing SSH style enrollments
PK-CROSS: Easing the administrative challenges to key exchange  Kerberos Cross Realm succeeds in Active Directory Forests because the key establishment is automated  Kerberos Cross Realm works for the major Universities and Government labs because they have taken the time to manually establish keys  For the rest of us, an automated key establishment protocol is required. Public key crypto could reduce the administrative burden to the configuration of policy.
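The PK-INIT reply logic described above can be sketched schematically. Everything here is a stand-in for illustration: a dict plays the client certificate, HMAC plays the KDC's public-key signature, and the trusted-CA set and function names are invented, not from any Kerberos implementation:

```python
import hashlib, hmac, secrets

TRUSTED_CAS = {"Example Campus CA"}     # the KDC's trusted CAs (assumption)
KDC_SIG_KEY = secrets.token_bytes(32)   # stands in for the KDC signature key

def kdc_sign(data):
    # Stand-in for a public key signature by the KDC
    return hmac.new(KDC_SIG_KEY, data, hashlib.sha256).digest()

def pkinit_as_reply(client_cert, use_dh=True):
    """Toy KDC side of PK-INIT pre-authentication."""
    # 1. Policy: the client's certificate must validate against a trusted CA
    if client_cert["issuer"] not in TRUSTED_CAS:
        raise PermissionError("client pre-auth rejected by KDC policy")
    # 2. Reply key: either derived from a DH exchange, or a fresh symmetric
    #    key that would be encrypted with the client's public key
    reply_key = secrets.token_bytes(32)
    mode = "dh-signed" if use_dh else "pk-encrypted"
    # 3. The KDC signs the keying material returned in the AS-REP's PA-PK data
    return {"mode": mode, "reply_key": reply_key,
            "signature": kdc_sign(reply_key)}

cert = {"subject": "alice", "issuer": "Example Campus CA"}
rep = pkinit_as_reply(cert)
# The client validates the KDC's signature before trusting the reply key
assert hmac.compare_digest(rep["signature"], kdc_sign(rep["reply_key"]))
# A certificate from an untrusted CA fails the policy check
try:
    pkinit_as_reply({"subject": "mallory", "issuer": "Unknown CA"})
except PermissionError:
    pass
```

The two branches mirror the slide's two options: a DH-derived key signed by the KDC, or a symmetric key signed and then encrypted to the client's public key; in both, possession of the reply key is what lets the client decrypt its initial TGT.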
KX.509 (or How to authenticate using a Kerberos identity to a PKI service)  KX509 utilizes a Kerberos Application Service authentication to communicate with a special certificate service that issues client certificates with the same identity and valid lifetime as the Kerberos Service ticket.  The resulting certificate is placed in the certificate store for use by applications such as web browsers. What’s Next for Kerberos and PKI Integration?  Standardize PK-CROSS and PK-APP  Strive for Zero Configuration  Standardize the use of SAML decoration of PKI Certificates and Kerberos Tickets  Standardize a firewall friendly method of communicating with Kerberos KDCs  Improve the user experience • Focus deployment efforts in order to reduce the number of credentials end users are responsible for References  KX509 http://www.kx509.org  IETF Kerberos Working Group http://www.ietf.org/html.charters/krb-wg-charter.html  Heimdal PKINIT http://people.su.se/~lha/patches/heimdal/pkinit/  Microsoft Windows 2000 PKINIT http://support.microsoft.com/kb/248753/en-us Q&A Requirements for Federated Single Sign-On  Trusted initial authentication • Smartcards, Zero Knowledge Inference, Biometrics, One Time Pads. • May require different methods depending on the environment  Mutual Authentication between each set of endpoints  Delegation of credentials with constraints • Forwardable Kerberos tickets • Authorization Data (MS PAC, SAML) provide constraints  Ability to present a recognizable credential to each service • Certificates or Tickets  Federated acceptance of presented credentials Enabling Revocation for Billions of Consumers Kelvin Yiu kelviny@microsoft.com Microsoft Corporation Agenda • Why X.509 Revocation is Difficult • Lessons Learned • Enabling Revocation – The Hard Questions • X.509 Revocation in Windows Vista • Best Practices The Consumer Grandma Understands This Right? • Hmmmm? 
• Despite popular legislation, you cannot legislate comprehension by end users • What do all of these fields mean to me? • certificatePolicies are for lawyers, not consumers or end users
Why is Revocation So Difficult? Multitude of Application Scenarios & Requirements • Client scenarios – SSL server authentication (Internet Explorer) – Smart card logon – Outlook S/MIME – Code signature verification (Authenticode) • Install time vs load time – Wireless, RAS • Server scenarios – Smart card logon (DC) – IIS SSL client authentication – Radius
Why is Revocation So Difficult? Multitude of Locations and Connectivity Options [Figure: main office, branch office, business partner, and remote user networks joined by LANs, a wireless network, and the Internet.] • A certificate may be validated anywhere using any connectivity option: • LAN • VPN • RPC over HTTP • Extranet • Private network • No connectivity
Why is Revocation So Difficult? Peak Bandwidth = $$$ Source: VeriSign (RSA 2005) • Usage mostly due to code signing CRLs (90%+) • Wide variance in bandwidth use – Highest use is Monday morning – High fixed cost to handle peak bandwidth • Client side retry logic means service degenerates quickly • OCSP generally uses less bandwidth than CRLs, but not always
Lessons Learned Enabling Revocation in Internet Explorer • First tried enabling SSL revocation in IE 3.02 – SSL sometimes grinds to a halt – IE 3.02 didn’t ship with revocation enabled • Threat - is the risk worth the pain? – $50 credit card liability – No real protection from phishing scams – Will users be bothered to report key compromise? • What is tolerable for the average consumer?
Lessons Learned Outlook 2000 S/MIME Deployments • Users complained Outlook often hangs when revocation checking is enabled – Lesson learned: 90s per URL timeout is too long. Will do 15s but let the retrieval finish in the background – Lesson 2 learned: 15s is still too long, but shorter timeout increases % retrieval failure • What were the causes?
– Outlook blocks until signature validation completes
• Outlook 2003 performs validation on a background thread
– Operational errors (offline server, CRL not published)
– Multiple URLs in the CDP (Internet vs. intranet)

Lessons Learned: Enabling Revocation for Authenticode
• Enabled revocation checking for ActiveX download as a critical security update
– Had to make revocation errors non-fatal to prevent regressions
• Caused problems for scenarios that validate signatures at load time
– Developers did not understand the network implications of calling the verify-signature API
– Some anti-virus products perform self-integrity checks periodically
– Machines in private networks cannot download CRLs

Lessons Learned: Misbehaving Proxies
• Unreliable caching semantics in HTTP 1.0
– The "Expires" header assumes synchronized clocks
– Windows sets "Pragma: no-cache" to avoid retrieving stale CRLs
• Auto-proxy does not always return active proxies
– Clients would fail randomly because a random proxy is selected from the list
• Incorrect proxy configuration (wininet.dll vs. winhttp.dll)
• Proxy access policy
– Not all users have Internet access
– Users, but not machines, have access

Enabling Revocation by Default: The Hard Questions
• Is the benefit worth the infrastructure and user costs?
• Should online revocation be required for all applications?
– OS boot and signature validation make this challenging
– What is the expected behavior when working offline?
• What is the expected behavior for mobile users?
– How does a laptop in a hotel room contact the intranet (LDAP) URL for CRLs? Should VPN be required?
– When is failure an acceptable option?
• Will users tolerate reduced performance and reliability?
• What is the reasonable level of assurance for consumers?

Enabling Revocation by Default: What Problem Does Revocation Really Solve?
• Revocation is an attempt at a perfect solution in an imperfect world
– Imperfect CA identity validation procedures
– Key compromise
• How often are key compromises reported to the CA?
– It can take days or weeks for the information to propagate
• HTTPS protects users from untrustworthy networks
– WiFi hotspots, neighbors
– Pharming attacks
• Works well when protecting users from keys/certificates that were compromised in the past

Our Goals for Windows Vista: Enabling Revocation for Billions of Consumers
• "It just works"
– Good defaults, but not optimized for all scenarios
– Can be fine-tuned with custom policy
• Balance between threat mitigation and user experience
• Minimize peak bandwidth usage for network operators and CAs
• Enterprise-managed tolerance on revocation freshness
– Network connectivity issues and infrastructure failures necessitate an "emergency mode" to ignore all offline and stale revocation errors
• IE7 on Windows Vista ships with revocation enabled by default!

Revocation in Windows Vista: Taking Revocation to the Next Level
• OCSP client
– Supports the lightweight OCSP profile
• TLS "stapling" extensions
– IE7 on Windows Vista and IIS7
• HTTP 1.1 caching proxies
• Randomized pre-fetch to take advantage of overlapping validity periods in OCSP or CRL
• Flush CRLs and OCSP responses from memory caches via certutil.exe
• OCSP responder in "Longhorn" Server

Revocation in Windows Vista: How TLS "Stapling" Scales
[Diagram: Grandma, www.contoso.com, and Contoso's public certification authority, connected over the Internet]
• Grandma connects to https://www.contoso.com
• Contoso pre-fetches the OCSP response for its certificate
• Contoso returns its certificate chain and the OCSP response in the TLS handshake
• Stapling reduces the load on the CA to the number of servers, not the number of clients

Revocation in Windows Vista: CRL vs. OCSP
• Windows will always prefer cached objects or a "stapled" OCSP response
• If network retrieval is required, then OCSP is
preferred if both AIA and CDP are present
– Try all OCSP URLs, then CDP URLs
• Windows will switch to CRLs if:
– The number of OCSP responses retrieved for an issuer exceeds 50 (configurable in the registry)
– Configured by group policy
• Network timeout is still 15 seconds per URL

Revocation in Windows Vista: How Pre-Fetch Works
• In the background, the client selects a random time between the next expected publication time and expiration
– Expected publication time is computed as fetch time + max-age

Revocation in Windows Vista: Why Pre-Fetch is Valuable
• TLS "stapling" does not return CRLs for intermediate CA certificates
• Works with both OCSP and CRLs
• Supports LDAP URLs too, using nextPublishTime
• Useful in server scenarios too
– Pre-fetches CRLs on domain controllers for smart card logon
• Pre-fetched URLs that are not used during the next cycle will be removed from the pre-fetch list

Revocation in Windows Vista: HTTP 1.1 Proxy Support
• Reduces the load on the CA to the number of proxies, not the number of clients
• Caches HTTP GETs; can be configured to cache dynamic content and HTTP POSTs, but not LDAP
• "ETag" allows conditional GETs
– Lets clients and proxies query the origin server for freshness without downloading the object
• "max-age" specifies how long proxies can return a cached object on their own
– Helps enable pre-fetch functionality in proxies
• Retrieval of a stale object will force all proxies to revalidate with the origin server

Revocation in Windows Vista: HTTP 1.1 Proxy Support
[Diagram: clients A, B, and C reach the revocation service across the Internet through an HTTP 1.1 caching proxy]
1. A requests the CRL on 2/1/2005, 8:00am.
2. The revocation service sends the following headers in the HTTP response:
HTTP/1.1 200 OK
Content-Length: 1653
Date: Sun, 01 Feb 2005 08:00:00 GMT
Content-Type: application/pkix-crl
Last-Modified: Sun, 01 Feb 2005 00:00:00 GMT
ETag: "39a0-28d-4029bce7"
Expires: Sat, 07 Feb 2005 23:59:59 GMT
Cache-Control: max-age=86400
3. The HTTP proxy caches the CRL and returns it to A.
4. B requests the same CRL an hour later. Since the proxy has cached the CRL for less than one day, the proxy can return its cached copy to B without revalidating with the revocation service.
5. C requests the same CRL two days later. Since it is more than one day since the proxy validated with the revocation service, it sends a conditional GET to the service:
GET http://...
If-None-Match: "39a0-28d-4029bce7"
6. The revocation service returns only updated headers to the proxy, since the CRL was not updated:
HTTP/1.1 304 Not Modified
Date: Tue, 03 Feb 2005 9:00:00 GMT
ETag: "39a0-28d-4029bce7"
Cache-Control: max-age=86400

Revocation Best Practices: Industry Call to Action
• Use HTTP, not LDAP
– Set ETag and Cache-Control: max-age
• Keep it simple – one OCSP URL and one CDP URL, accessible everywhere
• Use overlapping validity periods
• max-age should be less than the overlap period
– Can be shorter for long-lived CRLs
• Support the lightweight OCSP profile for high-volume environments
– Pre-generate OCSP responses if security requirements permit
– Don't use a nonce, since it is not cacheable
• Ensure new browsers and servers support stapling
• Push for stapling in updated protocols

Questions / Comments?
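The six-step caching-proxy walkthrough above can be simulated in a few lines of stdlib-only Python. This is an illustrative sketch, not a real HTTP stack: the proxy serves its cached CRL while the entry is younger than max-age, and afterwards revalidates with a conditional GET keyed by the ETag, so the origin sends the full CRL only once.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

class RevocationService:
    """Stand-in for the origin server publishing the CRL."""
    def __init__(self, body, etag, max_age):
        self.body, self.etag, self.max_age = body, etag, max_age
        self.full_downloads = 0
    def get(self, if_none_match=None):
        if if_none_match == self.etag:
            return 304, None              # Not Modified: headers only
        self.full_downloads += 1
        return 200, self.body

@dataclass
class CacheEntry:
    body: str
    etag: str
    fetched_at: datetime

class CachingProxy:
    """Stand-in for the HTTP 1.1 caching proxy in the diagram."""
    def __init__(self, origin):
        self.origin, self.entry = origin, None
    def get(self, now):
        e = self.entry
        if e and now - e.fetched_at < self.origin.max_age:
            return e.body                 # fresh: serve without revalidating
        status, body = self.origin.get(e.etag if e else None)
        if status == 304:
            e.fetched_at = now            # revalidated: entry still current
            return e.body
        self.entry = CacheEntry(body, self.origin.etag, now)
        return body

origin = RevocationService("CRL-bytes", '"39a0-28d-4029bce7"',
                           max_age=timedelta(days=1))
proxy = CachingProxy(origin)
t0 = datetime(2005, 2, 1, 8, 0)
proxy.get(t0)                             # steps 1-3: A, full download
proxy.get(t0 + timedelta(hours=1))        # step 4: B, served from cache
proxy.get(t0 + timedelta(days=2))         # steps 5-6: C, 304 revalidation
print(origin.full_downloads)
```

Note how the 304 path only refreshes the entry's timestamp; this is the mechanism that lets the origin's load scale with the number of proxies rather than the number of clients.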
• Experiment with Windows Vista Beta 2
• Feedback is always welcome – kelviny@microsoft.com

Background Slides: Other PKI Enhancements in Vista
• Path validation improvements
– Reject certs with unrecognized critical extensions
– Fixed a number of issues around qualified subordination
• Self-issued certificates
• inhibitAnyPolicy extension
• Apply name constraints to all certificates below the constraining certificate (not just the end entity)
– Cross-certificate discovery using the Subject Information Access extension
• ECC and SHA-2 support

Other PKI Enhancements in Vista
• Improved diagnostics support
– PKI applications are hard to troubleshoot
• Not enough information
• Too many moving parts
– Network or proxy problem?
– Bad information in the certificate?
– Application vs. platform problem?
– Extensive diagnostic information about path validation failures
• Information is structured in XML, designed for automated post-processing and troubleshooting
• Integrated with the new Windows Event Viewer
– No changes needed for legacy applications

Navigating Revocation through Eternal Loops and Land Mines
Santosh Chokhani (chokhani@orionsec.com)

Slide 1 Outline of Presentation
• Prior Research
• Motivation
• Notations
• Circularities due to Self-Issued Certificates
• Circularity in Indirect CRL
• Circularities in OCSP Responder
• CRL and OCSP Responder Certification Paths
• Summary

Slide 2 Prior Research
• Examined several research papers and projects
– Some based on one reviewer's comment
• Findings
– None of them deal with the issues we are dealing with
– The issues we are dealing with are concrete and deterministic (some of the research deals with heuristics or reasoning under uncertainty)
– The issues we are dealing with relate to gaps in the standards that can cause security problems

Slide 3 Motivation
• Find gaps in the PKI-related Internet and X.509 standards that can cause security problems
• Identify solutions that can mitigate (preferably fully) the security flaws

Slide 4 Notations
• Use
a (name, key) 2-tuple for issuer and subject
– An entity can have multiple keys
• Examples of notation
– Certificate (B, B-1)R, R-1
• Certificate issued to Subject DN "B" with Subject Public Key "B-1", signed by Issuer DN "R" using the private key companion to Issuer Public Key "R-1"
– CRLB, B-1
• CRL signed by Issuer DN "B" using the private key companion to Issuer Public Key "B-1"
– OCSPO, O-1
• OCSP response signed by a Responder with DN "O" using the private key companion to Responder Public Key "O-1"
• Motivation
– Complete (covers both name and key)
– Provides for easy chaining of name and signature (as required by X.509 and the Internet standards)
• Certificate (B, B-1)R, R-1, Certificate (C, C-1)B, B-1

Slide 5 Why Do PKIs Use Self-Issued Certificates?
• To maintain trust paths when a CA re-keys
• To have separate certificate and CRL signing keys
– Enhances operational security
• Certificate signing could require two-person control at all times
• CRL signing can be an automated operation

Slide 6 Self-Issued Scenario: CA Re-Keys
Problem: Signature verification on CRLB, B-2 requires trusting Certificate (B, B-2)B, B-1. But, to trust Certificate (B, B-2)B, B-1, CRLB, B-2 is needed.
Solution alternatives:
• Obtain a new certificate from the parent CA
• Sign the CRL using all "valid" keys
• Use the "no-check" extension
• Relax CRL checking requirements

Slide 7 Solution: Obtain a New Certificate from Parent
• In other words, eliminate the self-issued certificate.
• The parent may not be available when a CA re-keys (minor drawback)

Slide 8 Solution: Sign CRL Using All Active Keys
Benefits:
• Commercial products work well with this approach
• Simplest and most secure binding between certificate and CRL signer
Need to keep all active keys (minor drawback)

Slide 9 Solution: Use No-Check Extension
• What to do when Certificate (B, B-2)B, B-1 is compromised?
• Is the approach standards-compliant? (not in the strict sense)
• Do products support this?
(not likely)

Slide 10 Solution: Relax CRL Checking Requirement
• In other words, "no check" without asserting no-check
• Issues
– What to do when Certificate (B, B-2)B, B-1 is compromised?
• CA B can request revocation of Certificate (B, B-1)R, R-1
– Is the approach standards-compliant?
• Not in the strict sense
– Do commercial products support this?
• Probably not

Slide 11 What if the CA is a Trust Anchor?
• Issue old with new and new with old
– notAfter date in Certificate (B, B-1)B, B-2 = latest notAfter in certificates signed using the private key companion to B-1
• Secure from a cryptanalysis viewpoint
– notAfter date in Certificate (B, B-2)B, B-1 ≥ latest notAfter date in certificates signed using the private key companion to B-1
• Secure from a cryptanalysis viewpoint
• Assumes that subscribers will obtain the new root when they get a new certificate
• Other considerations are the same, except
– No parent to obtain a certificate from
– Signing the CRL with all keys is the best alternative

Slide 12 Self-Issued Scenario: CA Uses a Different Key for CRL Signing
Problem: Signature verification on CRLB, B-2 requires trusting Certificate (B, B-2)B, B-1. But, to trust Certificate (B, B-2)B, B-1, CRLB, B-2 is needed.
Solution alternatives:
• Obtain a new certificate from the parent CA
• Sign the CRL using all "valid" keys
• Use the "no-check" extension
• Relax CRL checking requirements

Slide 13 Solution: Obtain Certificate from Parent
• In other words, eliminate the self-issued certificate.
• The parent may not be available when a CA re-keys (minor drawback)

Slide 14 Solution: Use No-Check Extension
• What to do when Certificate (B, B-2)B, B-1 is compromised?
• Is the approach standards-compliant? (not in the strict sense)
• Do products support this?
(not likely)

Slide 15 Solution: Relax CRL Checking Requirement
• In other words, "no check" without asserting no-check
• Issues
– What to do when Certificate (B, B-2)B, B-1 is compromised?
• CA B can request revocation of Certificate (B, B-1)R, R-1
– Is the approach standards-compliant?
• Not in the strict sense
– Do commercial products support this?
• Probably not

Slide 16 What if the CA is a Trust Anchor?
• Instead of getting a certificate from the parent, use two trust anchors (one to verify certification paths and one to verify the root-issued CRL)
– A constraint on the CRL signing trust anchor may not be technically enforceable, but the root can be operationally trusted not to issue certificates using the CRL signing key
• Other considerations remain the same, except
– Revocation requires out-of-band means to notify relying parties to delete the trust anchor

Slide 17 Circularity in Other Revocation Mechanisms
• Indirect CRL
– Some differences from the above scenarios
– See the paper for details
• OCSP
– Circularity due to the Responder providing its own status
• Solution: the OCSP client checks the OCSP Responder certificate (it does not point to itself as its own OCSP Responder)
– Circularity in the trust path

Slide 18 Circularity in OCSP Responder Trust Path
Solution: The Responder should not provide the status of CA certificates in the Responder's certification path. This does not mean that a Responder cannot provide the status of CA certificates in a certification path.
For example, if each of the CAs issued a certificate to the Responder, then the status of the issuing CA's subordinate CAs can be securely provided by the Responder.
Circularity is not a concern for the two major OCSP clients, since they require the CA-delegated model or trust anchor model, both of which eliminate circularity.

Slide 19 CRL Certification Path Problem
Problem: Two CAs with the name "A" can confuse the relying party into checking a certificate against a CRL issued by the wrong "A".
Commercial products requiring the same key to sign certificates and CRLs do not have this problem.
The problem can be real in a Bridge-to-Bridge environment where name constraints are not enforced on shared service providers.
In a Bridge-to-Bridge environment, the problem is not fixed by the new requirement of terminating the certification paths at the same trust anchor.
Solution: Name matching at each layer in the certification path; this also helps with computational complexity.

Slide 20 OCSP Certification Path Problem
• The problem is akin to the CRL certification path problem
• Not as acute
– Major OCSP client vendors ensure security through the trust model
• The Responder is either a trust anchor, or is issued a certificate signed by the same CA and same key as the certificate in question

Slide 21 Summary: Self-Issued Certificates
• Can lead to circularity
• Not checking the revocation status of self-issued certificates is not the answer
• There are standards-compliant alternatives to remove circularity
– Selection of the alternative may depend on your PKI environment

Slide 22 Summary: CRL Certification Path
• Standards do not provide guidance on CRL certification paths
• This lack of guidance could lead to insecure results in a Bridge-to-Bridge cross-certified environment where name constraints may not always be used
– The problem only surfaces when CA names collide
• The solution is to do name matching at each layer of the certification path
– Reduces computational complexity for certification path development while enhancing security
• Commercial products that require
the same key to sign certificates and CRLs do not have the security problem

Slide 23 Summary: OCSP Responder Certification Path
• Standards do not provide guidance on the OCSP Responder certification path
• This lack of guidance could lead to insecure results in a Bridge-to-Bridge cross-certified environment where name constraints may not always be used
• A solution was developed that can reduce the computational complexity of certification path development while enhancing security
• Popular commercial products do not have the security problem
– They require the same key to sign the certificate in question and the OCSP Responder certificate; or
– They require the OCSP Responder to be a trust anchor
• The trust anchor solution may not be scalable in cross-certified and Bridge environments unless Responders obtain the responses from each other and re-sign them

Slide 24 Questions
Slide 25

Simplifying Public Key Credential Management Through Online Certificate Authorities and PAM
Stephen Chan, Matthew Andrews

Abstract
The secure management of X509 certificates in heterogeneous computing environments has proven to be problematic for users and administrators working with Grid deployments. We present an architecture based on short lived X509 credentials issued by a MyProxy server functioning as an Online Certificate Authority, on the basis of initial user authentication via PAM (Pluggable Authentication Modules). The use of PAM on the MyProxy server allows credential security to be tied to external authentication mechanisms such as One Time Password (OTP) systems, conventional LDAP directories, or federated authentication services such as Eduroam. Furthermore, by also leveraging PAM at the authenticating client, X509 certificates are transparently issued as part of the normal system login process. When combined with OTP authentication, both OTP and PKI become more manageable and secure.
When combined with federated authentication services such as Eduroam, large, distributed user populations can have instant access to X509 credentials that provide transparent single sign-on across virtual communities that span sites, countries and continents. . mechanisms, low quality (or even null) Motivations passphrases are often chosen by users. 2. Users are not always aware of the necessary The usability and security issues of X509 filesystem permission settings on private keys to certificates have been a concern for users and maintain security. administrators of Grid computing for the past several 3. Credentials may be stored on shared network years. Beckles, Welch and Basney[1] summarized the filesystems that are vulnerable to sniffing or observations made in the community, as well as authentication compromise (as well as exposure directions for future development. Whitten and due to inadequate permissions settings). Tygar[2] described the broad security issues with PKI 4. Certificate revocation is not uniformly deployed and the usability issues of another PKI tool, PGP. We by certificate authorities, nor is it uniformly believe that many of the usability issues identified by checked by relying parties. Whitten and Tygar also apply to openssl, the tool 5. If a user’s passphrase is lost or forgotten, the generally used to manipulate X509 certificates as part only recourse is revocation and re-issuance of the of Grid certificate management practices. In fact, certificate. Whitten and Tygar evaluate a graphical user interface 6. The “barn door” property: it is futile to lock the to PGP, which is arguably simpler for end users than barn door after the horse is gone. Once a secret a complex and overloaded command-line interface has been left unprotected, even for a short time, such as openssl. 
there is no way to be sure that it has not already Summarizing the usability and security issues been read by an attacker – given the problems from these two papers we have the following: with securing private keys listed above, it is hard to be confident of the integrity of a certificate. 1. Users are sometimes unaware of, or unmotivated The problem is made worse by the long lifetimes by, the necessity for strong passphrases to secure (typically 1 year) of a certificate and the their private keys, and there are no difficulty of ensuring that revocations are administrative controls to enforce passphrase effective. quality. 7. Users need to have copies of their certificate and It is widely observed that in the absence of private key at every location where they will use strong password/passphrase enforcement the certificate for authentication. This magnifies the key management issues already described. 8. Tools for manipulating PKI credentials (such as considered an acceptable risk to store the proxy PGP and openssl) have usability issues. certificate credentials unencrypted, but protected with Acquiring a Grid credential sometimes requires secure file permissions. With an unencrypted proxy, either generating a keypair and certificate the user no longer needs to enter a passphrase to signing request with an openssl based tool, or decrypt the private key at each authentication. else exporting the certificate and key from a Assuming the relying party trusts the certificate browser, and using openssl to translate the authority that signed the user’s certificate, the certificate into a different encoding scheme[3]. certificate chain from the proxy to the CA can be Changing passphrases on private key generally used to authenticate the user. requires use of openssl. 
Proxy certificates vastly simplify the authentication process, allowing Grid users to have In addition, keylogging has become more single sign-on across physically and administratively common in exploits and malware - until such time as distributed systems. Systems in different secure virtual machines that are somehow keylogger- administrative domains can decide independently if proof[4] are deployed, the security of any secret they will accept an individual certificate, and map the protected by a static password/passphrase is in certificate into a local account. This provides for question. single sign-on across a collection of loosely coupled In response to the proliferation of keyloggers, systems. One Time Passwords (OTP) have been evaluated[5] Normally users need a copy of their personal and deployed at many sites. One Time Passwords certificate credentials at every location where they bring their own usability issues: may want to generate a proxy – for users with many accounts across many machines, this often means 9. Sites typically have their own OTP systems, and copying the credentials to each working account on cross vendor, cross realm compatibility is often the different machines. This creates security and lacking logistical issues because all credential copies must be Consequently, users may be forced to have an managed properly: file permissions, passphrases and individual OTP token per site where they have revocation/renewal must be applied to each an account. certificate at each location. As the problem gets 10. Asking users to authenticate with a different larger, the temptation to take shortcuts and the password every time they log into the same likelihood of errors inevitably becomes greater. system may prove onerous, especially in The MyProxy service addresses these issues environments where Single Sign-On by allowing the user to store a set of longer lived authentication (Kerberos, Globus GSI, etc…) is proxy credentials on a central server. 
After the norm. authenticating to the MyProxy service, a client can 11. OTP mechanisms are not compatible with batch then locally generate a new key-pair, and request that job schedulers, or many unattended distributed the stored proxy credentials sign a short-lived proxy systems platforms. certificate for those local credentials. In this way, users can generate a signed proxy from any location We have worked to address the usability and that has network access to the MyProxy server, security issues around X509 certificates and One without needing to manage multiple copies of their Time Passwords in our design, however the solution personal certificate credentials. is not tied to One Time Passwords and is compatible In response to the threat posed by keystroke with many legacy and future authentication systems. loggers, a roadmap for integration of MyProxy with OTP was described by Basney, Welch and Siebenlist Deploying a MyProxy based in 2004[8]. Since then, development on MyProxy has progressed along the roadmap: Online Credential Authority • NCSA has added support for OTP using MyProxy[6] has been used as an online PAM[9] credential repository in the Grid Community for • Code from Monte Goode and Mary several years and has been undergoing constant Thompson of Lawrence Berkeley Lab was development. Historically, Grid Authentication has included in the MyProxy 3.0 release that been done with proxy certificates, which are short supported online Certificate Authority (CA) lived certificates signed either by the user’s end functionality[10]. The Online CA serves as a entity certificate or by another proxy[7]. Because certificate authority that returns a signed proxies are short lived, the consequences of short lived end entity certificate to the client compromise are limited in time. Therefore, it is instead of a short lived proxy certificate. 
So long as the relying parties trust the certificate used by the MyProxy online CA Our efforts at NERSC/LBL have been to work to sign the certificate request, this certificate with Goode and Thompson to specify and test the is valid for Grid authentication, or any other online CA functionality, and to integrate the X509-based authentication. By using an MyProxy online CA into existing and future online CA with short lived certificates, we authentication systems (PAM, OTP and Kerberos). avoid the key management problems of We have developed PAM modules that make the having large numbers of long lived process of acquiring certificates from MyProxy and certificates that need to be managed by mapping them to Kerberos credentials transparent to either the end user, or the MyProxy end users. administrators. Figure 1: Logical Diagram of NERSC OTP/MyProxy environment require Kerberos, we will release a PAM module that Figure 1 is a logical diagram of the environment implements only the MyProxy credential being developed and tested at NERSC. It implements functionality. The components of the environment the roadmap described by Basney, Welch and are: Siebenlist as well as introducing a PAM module on • MyProxy 3+ - configured as an Online the client that transparently acquires a short lived Certificate Authority and using a RADIUS credential from the MyProxy service and uses it to PAM module to contact a Radius router acquire a Kerberos credential. For sites that do not • Radius Router (FreeRadius) – configured protocol which allows the user to prove his with a module that queries a local OTP identity using x509 credentials rather than service over an SSL connection. The Radius the traditional Kerberos shared server is capable of supporting a Radius secret(password). Authentication Fabric[11] such as 4. The system’s krb5.conf specifies the use of Eduroam[12] for authentication federations. 
an openssl engine module called • Kerberos – our environment uses Heimdal myproxy_engine to acquire the x509 Kerberos because it has the most mature credentials. support for pkinit, allowing X509 5. The myproxy_engine module prompts the certificates to be used to acquire Kerberos user for his password using a prompter credentials. function which has been passed by reference • PAM – We are using a set of patches by all the way down the call stack from the Doug Engert to the standard Kerberos5 original PAM aware application(in this case PAM module[13]. In the current design login.) pkinit calls an openssl engine module to 6. The myproxy_engine module generates a transparently (from the user’s point of view) public/private keypair, and a certificate acquire a certificate from the MyProxy request. server. Future work will include a 7. The certificate request is then sent to the standalone PAM module that acquires a myproxy server along with the users certificate from MyProxy without any username, and password as part of a connection to Kerberos. myproxy protocol get request. The myproxy • One Time Password Server – we use an protocol uses the SSL/TLS protocol both to OTP service developed within the verify the authenticity of the myproxy Department of Energy that supports server,(you don’t want to send a valid authentication tokens from CryptocardTM. password to the wrong server) and to ensure This particular OTP server can be replaced the privacy of the exchange. with a different OTP service, or with a static 8. Upon receiving the get command, the authentication system such as LDAP. An myproxy server uses the pam libraries on open source FreeRadius module that it’s system to attempt to authenticate the supports Ansi X9.9 authentication user. tokens[14] is also available. 9. The pam libraries on the myproxy system pass the authentication request on to a The system described here is in development and pam_radius module which uses the testing at NERSC/LBL. 
The MyProxy, Radius, RADIUS protocol to a locally trusted Kerberos and OTP components are in limited RADIUS server. This RADIUS server may deployment to staff members. The pkinit/myproxy verify the validity of the password locally, integration is in testing, which will provide seamless or forward the request on to a federated integration of One Time Passwords, X509 certificates system such as Eduroam. and Kerberos. 10. If the RADIUS server confirms the validity of the user’s password, the myproxy server then creates a short lived certificate for that The Login Process user, and signs it using locally accessible In order to demonstrate how this system works in CA credentials(possible stored on a smart practice we will walk through the steps involved in card or similar crypto system.) authenticating a user who is attempting to log into a 11. The myproxy server now returns the new workstation that uses this system for its certificate as part of the success reply to the authentication service: get command, and the myproxy_engine 1. The Workstation’s login program uses the module returns the certificate and keypair to system’s PAM library to request the krb5 library, and stores them in a local authentication of the user. file for use by the user if the login succeeds. 2. The system’s PAM library passes on the 12. At this point the krb5 library uses the authentication request to a pam_krb5 certificate to perform a krb5 authentication module. exchange using the pkinit protocol 3. The pam_krb5 module has been configured extension. to attempt to authenticate the user via the 13. When the krb5 Key Distribution pkinit extension to the krb5 authentication Center(KDC) receives the authentication request, it checks that there is a valid additional password entry for a limited certificate chain linking the certificate used amount of time. in the request to a CA trusted by the KDC. 
If the request passes this check, then the KDC checks a local file which provides a Evaluating the Design mapping of x509 DNs to Kerberos 5 principal names to determine if the entity We feel that the most important aspects of this described in the cert maps to the principal approach are: specified in the authentication request. If • Simplifying the process of acquiring and this check succeeds, then the KDC sends a managing X509 certificates for end user by success reply along with a Kerberos ticket using PAM modules and short lived back to the krb5 library on the workstation. certificates 14. The krb5 library finally returns successfully • Potential integration with Federated to the pam_krb5 module which stores the authentication systems such as Eduroam. Kerberos ticket in a new credential cache, • The use of One Time Passwords to avoid the and returns success to the system PAM dangers posed by keyloggers library, which in turn returns success to the login program. The following table shows the issues identified 15. The user is allowed to log into the earlier and how they are addressed. In some cases the workstation, and has access to his Kerberos, issue is totally resolved, in others it mitigates, but and x509 credentials which can then be used does not solve the problem. to access additional services without Usability/Security Issue Response Users are sometimes unaware of, or unmotivated Passwords are in backend authentication system. by, the necessity for strong passphrases. Centralized password strength checking at backend. Users are not always aware of the necessary PAM module handles short term certificates and filesystem permission settings on private keys to keys on behalf of user. Long term certificates maintain security eliminated, avoiding those private keys entirely. 
Usability/Security Issue: Credentials may be stored on shared network filesystems that are vulnerable to sniffing or authentication compromise.
Response: A PAM module handles certificates, and can be administratively configured to store credentials in the filesystem, memory, a kernel keyring, an HSM, etc.

Usability/Security Issue: Certificate revocation is not uniformly deployed by certificate authorities, nor is it uniformly checked by relying parties.
Response: Short-lived (hours to days) certificates mitigate revocation issues. A configurable CA interface allows attributes such as an OCSP URL to be added to certs.

Usability/Security Issue: If a user's passphrase is lost or forgotten, the only recourse is revocation and reissuance of the certificate.
Response: The passphrase/password is in an external authentication service (via PAM) and can be changed as appropriate.

Usability/Security Issue: The “barn door” property: it is futile to lock the barn door after the horse is gone. Once a secret has been left unprotected, there is no way to be sure that it has not already been read by an attacker.
Response: Mitigated by short certificate lifetimes and the potential to embed an OCSP URL attribute in the certificate, enabling realtime revocation without proving onerous to the user.

Usability/Security Issue: Users need to have copies of their certificate and private key at every location where they will use the certificate for authentication.
Response: The MyProxy credential store was originally designed to mitigate this problem. The proposed solution builds on its existing benefits.

Usability/Security Issue: Tools for manipulating PKI credentials (such as PGP and openssl) have usability issues.
Response: Use of a PAM module merges certificate acquisition and management into the normal login process. It is no longer necessary for the user to be exposed to the openssl command line.

Usability/Security Issue: Sites typically have their own OTP systems, and cross-vendor, cross-realm compatibility is often lacking.
Response: Support for a RADIUS fabric allows cross-platform, cross-site OTP authentication.

Usability/Security Issue: Asking users to authenticate with a different password every time they log into the same system may prove onerous in environments where Single Sign-On authentication (Kerberos, Globus GSI, etc.) is already in place.
Response: The certificate (or Kerberos ticket) provides a persistent authentication token.

Usability/Security Issue: OTP systems are not compatible with batch job schedulers or many distributed systems platforms.
Response: See above.

One of the benefits of this design is that it is fully backward compatible with existing systems that use either Kerberos tickets or Grid authentication: the changes only affect how a certificate and/or a Kerberos ticket is acquired. The caveat is that X509 relying parties must include the MyProxy Online CA's certificate in their collection of trusted certificates.

The system also allows any site to issue X509 certificates based on existing username/password authentication schemes: so long as their system has a PAM interface, it can be plugged into the MyProxy server for user authentication. In an era where passwords and passphrases are vulnerable to keystroke logging and to malware installed by hackers and vendors alike, the value of centrally managed access to certificates should not be underestimated.

Because this approach only affects the initial acquisition of the certificate and Kerberos ticket, there is no performance penalty on any of the subsequent authentication using these credentials. The lifetime of the credentials determines how often new ones have to be acquired; typically sites will use a lifetime of between one and two working days. On our local systems, it takes a total of under 1.5 seconds for the entire process of authenticating against an OTP service, acquiring an X509 certificate, and using pkinit to acquire a Kerberos credential. This is a small fraction of the time it takes a user to look up and type in a one time password. We believe that much of the 1.5 seconds is due to latencies introduced by communicating with multiple services over the network, and not to computational overhead. Because of the infrequent need to acquire new credentials and the brief time it takes to perform the task, we do not believe that performance is an issue with this approach. Additional instances of the server would be desirable to support redundancy, not performance.

Comparison to Similar Designs

The integration of Kerberos and X509 certificates has been successfully developed and released as part of the kx509 and KCA projects at the University of Michigan [15]. OTP and Kerberos integration has been described by Hornstein, et al. [16]. FermiLab has successfully integrated these two efforts into a production service that uses OTP tokens to acquire Kerberos credentials, and KCA to translate the Kerberos credentials into x509 certificates [17].

A technical evaluation of the current Kerberos and OTP authentication scheme revealed that the Kerberos server needed to have privileged access to an OTP server in order to encrypt the Kerberos ticket with the one time password. This would not be an acceptable design for a federated authentication scheme, where a Kerberos server would need privileged access to a remote OTP service to authenticate a user with a remote site's token.

We investigated approaches that used RADIUS to authenticate against remote authentication services and then encrypt the Kerberos ticket using the password. Because the password is the encryption key for the Kerberos ticket, additional layers of encryption and security would be needed to ensure that the password is not exposed to sniffing and decryption. This is especially relevant given the known shortcomings of RADIUS crypto [18]. In a MyProxy-based approach, the private key is generated locally by the MyProxy client, and it never goes over the network. The MyProxy transaction is SSL encrypted, so the password has reasonable protection, and if the PAM module on the MyProxy server is configured to use hashes instead of cleartext passwords for authentication, the user's password need never go over the network in the clear. Along with the fact that the private key does not travel over the network, this makes the approach significantly more secure when federated authentication is desired.

There are also commercial solutions that integrate Kerberos and One Time Passwords. In our investigations, we found no evidence that these off-the-shelf solutions would be interoperable among the different OTP vendors. We were also concerned about being locked into a single vendor's solution and not having access to source code, as well as the cost of initial deployment and ongoing license fees. Our approach uses open source and/or standards-compliant tools wherever possible. In addition, this design is vendor neutral with regard to OTP: so long as an OTP service supports RADIUS, it can operate in the framework.

Lessons Learned

The openssl engine interface for getting x509 certs from myproxy was chosen so that existing krb5 applications such as kinit would be able to work without modification; however, this approach has proven to have several problems:
• The engine API provides no standard way to pass a username into the engine, so the Kerberos libraries needed to be modified to pass this via a generic engine control interface.
• If authentication fails later in the authentication process, there is no mechanism to go back and clean up the x509 creds stored in the local filesystem.

For this reason it is our intent to move to a system which uses a series of PAM modules, one of which performs the myproxy authentication, and another which performs the krb5 pkinit authentication using the x509 creds acquired and stored by the first.

Future Work

In an earlier section, we described the goal of developing decoupled PAM modules for MyProxy authentication (without also acquiring Kerberos tickets). We also feel it would be desirable to add attributes to the X509 certificates and the Kerberos tickets that designate them as having been acquired with a One Time Password. This would allow relying parties to enforce policies related to password strength.

In addition to concerns about password strength, relying parties may also want real-time revocation information about credentials. OCSP is one approach which supports this functionality. Additional attributes in the MyProxy-signed certs that point to an OCSP responder are therefore another goal for future work.

Conclusion

The experience of the Grid community with deploying PKI has made clear the usability and security issues around managing certificates. One approach to simplifying the management of certificates is to entirely eliminate long-term certificates, and to use tools like PAM to embed short-term certificates within the existing authentication processes. This is the overall approach we have taken, and we believe that the improvements in usability and security are significant. While our approach is Kerberos based, we intend to decouple the MyProxy client code from pkinit, and release the source to a PAM module that uses myproxy directly to acquire a certificate from the MyProxy server, without any Kerberos requirements.

The other usability issue we have tried to address is the adoption of One Time Passwords. By tying OTP into a single sign-on system, and providing a route for federating authentication domains over RADIUS, we simultaneously address the usability issues of OTP at a single site, as well as OTP across multiple sites. We believe that this approach has the potential to scale across sites, nations, and continents; Eduroam is one of the first examples of a RADIUS authentication fabric. At the time of writing, Eduroam spans 20 nations [19] and there is interest in expanding further.

Because our approach is vendor and platform agnostic, open source, standards compliant, and does not require tight administrative or technical coupling, we feel that it is a good technical starting point for developing scalable, usable, and secure authentication infrastructures. Despite the potential for scalability, it is also reasonably easy for a small site to deploy such a system for internal use and interface it into their legacy authentication scheme. We have confidence in this overall approach because it builds on the collective experience and collaborative efforts of the DOE Grids and Globus communities. Our design is one example of a new generation of PKI tools for Grid computing which is starting to appear, building on the experience of the past several years.

This work builds on, and has been deeply dependent on, the efforts of Monte Goode, Mary Thompson, Jim Basney, Von Welch, Mike Helm, Eli Dart, Steve Lau, William Kramer, Buddy Bland, Scott Studham, Remy Evard, Tom Barron, Dane Skow, Craig Goranson, Gene Rackow, Tony Genovese, Dhiva Muruganantham, Suzanne Willoughby, Anne Hutton, Howard Walter, Frank Siebenlist, Ken Hornstein, Doug Engert, Love Hörnquist Åstrand, and the many others who have worked on pkinit.

References

[1] Beckles, B., Welch, V., Basney, J., “Mechanisms for increasing the usability of grid security”, International Journal of Human Computer Studies, July 2005, vol. 63, pp. 74-79.
[2] Whitten, A., Tygar, D., “Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0”, Proceedings of the 8th USENIX Security Symposium, August 1999, pp. 169-183.
[3] “How to request certificates from the DOEGrids CA”, http://www.doegrids.org/pages/cert-request.html
[4] Sinclair, S., Smith, S., “The TIPPI Point: Towards Trustworthy Interfaces”, IEEE Security and Privacy, July 2005, p. 71.
[5] Chan, S., Lau, S., Srinivasan, J., Wong, A., “One Time Password for Open High Performance Computing Environments”, http://www.es.net/raf/OTP-final.pdf
[6] Novotny, J., Tuecke, S., Welch, V., “An Online Credential Repository for the Grid: MyProxy”, Proceedings of the 10th IEEE International Symposium on High Performance Distributed Computing, 2001, pp. 104-114.
[7] http://www.globus.org/toolkit/docs/4.0/security/key-index.html
[8] Basney, J., Welch, V., Siebenlist, F., “A Roadmap for Integration of Grid Security with One Time Passwords”, May 2004, http://www.nersc.gov/projects/otp/GridLogon.pdf
[9] Basney, J., “Using the MyProxy Online Credential Repository”, presented at GlobusWorld 2005, http://www.globusworld.org/2005Slides/Session%204b(2).pdf, p. 15.
[10] “The MyProxy Certificate Authority”, http://grid.ncsa.uiuc.edu/myproxy/ca/
[11] Helm, M., Genovese, T., Morelli, R., Muruganantham, D., Webster, J., Chan, S., Dart, E., Barron, T., Menor, E., Zindel, A., “The RADIUS Authentication Fabric: Solving the Authentication Delivery Problem”, 2005, http://www.es.net/raf/OTP-final.pdf
[12] Florio, L., Wierenga, K., “Eduroam: Providing mobility for roaming users”, http://www.eduroam.org/docs/eduroam-eunis05-lf.pdf
[13] Engert, D., “Use of PKINIT from PAM”, Heimdal Discuss Mailing List Archives, April 28, 2005, http://www.stacken.kth.se/lists/heimdal-discuss/2005-04/msg00101.html
[14] Cusack, F., “Documentation for pam_x99_auth and rlm_x99_token”, Google, 2002, http://www.freeradius.org/radiusd/doc/rlm_x99_token
[15] Doster, W., Watts, M., Hyde, D., “The KX.509 Protocol”, CITI Technical Reports Series 01-02, 2001, http://www.citi.umich.edu/techreports/reports/citi-tr-01-2.pdf
[16] Hornstein, K., Renard, K., Newman, C., Zorn, G., “Integrating Single-use Authentication Mechanisms for Kerberos”, IETF Internet Drafts, Kerberos Working Group, 2004, http://www1.ietf.org/proceedings_new/04nov/IDs/draft-ietf-krb-wg-kerberos-sam-03.txt
[17] Private correspondence and discussions in Grid PKI working groups.
[18] Hassell, J., “The Security of RADIUS”, in RADIUS, O’Reilly & Associates, 2002, pp. 131-138.
[19] Eduroam web site, http://www.eduroam.org/

Simplifying Public Key Credential Management Through Online Certificate Authorities and PAM

Steve Chan and Matthew Andrews
NERSC Division, LBNL
Presented at PKI06 Workshop, NIST, April 4-6, 2006

Original Motivations
• Originally motivated by security threats from keystroke loggers and the desire for better Grid support
• Desired system has the following properties:
  – Minimize the change to existing authentication mechanisms
    • Less user confusion
    • Does not disrupt current work practices
  – Provide OTP security without the burden of constantly typing in OTPs
  – Single sign-on using a technology that has been tested in production

De-Motivators (or What’s Wrong with PKI and OTP?)

PKI-Related De-Motivators
• Users are sometimes unaware of, or unmotivated by, the necessity for strong passphrases
• Users are not always aware of the filesystem permission settings necessary on private keys to maintain security
• Credentials may be stored on shared network filesystems that are vulnerable to sniffing or authentication compromise
• Certificate revocation is not uniformly deployed by certificate authorities, nor is it uniformly checked by relying parties
• If a user’s passphrase is lost or forgotten, the only recourse is revocation and reissuance of the certificate
• The “barn door” property: it is futile to lock the barn door after the horse is gone. Once a secret has been left unprotected, there is no way to be sure that it has not already been read by an attacker

De-Motivators cont’d
• Users need to have copies of their certificate and private key at every location where they will use the certificate for authentication
• Tools for manipulating PKI credentials (such as PGP and openssl) have usability issues
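The private-key de-motivators above are concrete: a key file readable by group or others undermines the whole scheme. As an illustrative sketch only (the file names are hypothetical, and this is the kind of housekeeping the PAM module described here performs so users never have to), such a check can be written as:

```python
# Illustrative check for the private-key permission de-motivator above.
# Paths are hypothetical; the PAM module in the described system does
# this kind of housekeeping automatically on the user's behalf.
import os
import stat

def key_permissions_ok(path):
    """Return True if only the owner can access the private key file."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

def write_private_key(path, pem_bytes):
    """Create the key file with owner-only permissions from the start."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(pem_bytes)
```

Creating the file with mode 0600 from the outset, rather than fixing permissions afterwards, avoids the window in which the key is world-readable.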
OTP-Related De-Motivators
• Sites typically have their own OTP systems, and cross-vendor, cross-realm compatibility is often lacking
• Asking users to authenticate with a different password every time they log into the same system may prove onerous in environments where Single Sign-On authentication (Kerberos, Globus GSI, etc.) is already in place
• OTP systems are not compatible with batch job schedulers, or with many distributed systems platforms

Components of a Solution
• MyProxy
• Kerberos
• PAM
• One Time Passwords
• RADIUS

MyProxy
• In use as an online credential store
  – Originally stored and signed proxy certificates
• Extended to
  – Store long-term certificates
  – Authenticate against external authentication sources
  – Act as an online certificate authority
• Currently maintained at NCSA
  – Jim Basney is lead

Kerberos
• Heimdal Kerberos
• Compatible with MIT Kerberos
• Full source available
• Support for pkinit deemed more mature and stable

PAM
• Pluggable Authentication Modules
• Supported by most common Unix distributions (Linux, Solaris, etc.)
• Modularizes authentication to support different authentication methods
  – Password files
  – LDAP
  – Kerberos
  – RADIUS
  – MyProxy

One Time Passwords
• Became prominent as a way to defeat keystroke loggers
• Can be supported by either
  – Hardware (SecurID, CryptoCard, etc.)
  – Software (OPIE)
• Most hardware OTP tokens support RADIUS in some form
• Sandia OTP based on CryptoCard libraries
  – Java-based server that supports replication
  – Module written for FreeRADIUS that uses the Sandia client

RADIUS
• Common protocol used for authentication queries
• FreeRADIUS is an open source RADIUS server
• (Relatively) easy to write modules to extend functionality
  – Module for RADIUS routing (for the RADIUS fabric)
  – Module for OTP authentication to the Sandia server

Integrated Solution

System Description
• MyProxy
  – Used as an online certificate authority
  – Interfaced to the OTP system via PAM (RADIUS)
• PAM
  – Module used on client machines to acquire an X509 cert from the MyProxy server and then acquire a Kerberos credential via pkinit
  – Module used on the MyProxy server to authenticate to the OTP service
• Kerberos
  – Uses pkinit extensions to authenticate the user via an X509 certificate

Description cont’d
• One Time Passwords
  – Integrated with MyProxy via the FreeRADIUS interface
  – Integrated with other sites via the FreeRADIUS radius router
• FreeRADIUS
  – Serves as “router” for OTP requests coming in over RADIUS
  – Module issues OTP requests to the Sandia OTP server
  – Another module is capable of routing/translating local usernames to remote usernames and routing requests appropriately

Benefits of Design

Usability/Security Issue: Users are sometimes unaware of, or unmotivated by, the necessity for strong passphrases.
Response: Passwords are in the backend authentication system, with centralized password strength checking at the backend.

Usability/Security Issue: Users are not always aware of the filesystem permission settings necessary on private keys to maintain security.
Response: A PAM module handles short-term certificates and keys on behalf of the user. Long-term certificates are eliminated, avoiding those private keys entirely.

Usability/Security Issue: Credentials may be stored on shared network filesystems that are vulnerable to sniffing or authentication compromise.
Response: A PAM module handles certificates, and can be administratively configured to store credentials in the filesystem, memory, a kernel keyring, an HSM, etc.

Usability/Security Issue: Certificate revocation is not uniformly deployed by certificate authorities, nor is it uniformly checked by relying parties.
Response: Short-lived (hours to days) certificates mitigate revocation issues. A configurable CA interface allows attributes such as an OCSP URL to be added to certs.

Usability/Security Issue: If a user’s passphrase is lost or forgotten, the only recourse is revocation and reissuance of the certificate.
Response: The passphrase/password is in an external authentication service (via PAM) and can be changed as appropriate.

Usability/Security Issue: The “barn door” property: it is futile to lock the barn door after the horse is gone. Once a secret has been left unprotected, there is no way to be sure that it has not already been read by an attacker.
Response: Mitigated by short certificate lifetimes and the potential to embed an OCSP URL attribute in the certificate, enabling realtime revocation without proving onerous to the user.

Benefits cont’d

Usability/Security Issue: Users need to have copies of their certificate and private key at every location where they will use the certificate for authentication.
Response: The MyProxy credential store was originally designed to mitigate this problem. The proposed solution builds on its existing benefits.

Usability/Security Issue: Tools for manipulating PKI credentials (such as PGP and openssl) have usability issues.
Response: Use of a PAM module merges certificate acquisition and management into the normal login process. It is no longer necessary for the user to be exposed to the openssl command line.

Usability/Security Issue: Sites typically have their own OTP systems, and cross-vendor, cross-realm compatibility is often lacking.
Response: Support for a RADIUS fabric allows cross-platform, cross-site OTP authentication.

Usability/Security Issue: Asking users to authenticate with a different password every time they log into the same system may prove onerous in environments where Single Sign-On authentication (Kerberos, Globus GSI, etc.) is already in place.
Response: The certificate (or Kerberos ticket) provides a persistent authentication token.

Usability/Security Issue: OTP systems are not compatible with batch job schedulers or many distributed systems platforms.
Response: See above.

Lessons Learned
• OpenSSL Engine interface
  – The engine API provides no standard way to pass a username into the engine, so the Kerberos libraries needed to be modified to pass this via a generic engine control interface
  – If authentication fails later in the authentication process, there is no mechanism to go back and clean up the x509 creds stored in the local filesystem
• Move to a stacked PAM module approach instead of everything embedded in a single OpenSSL Engine call Future Work • Decouple myproxy from OpenSSL code for separate PAM module • Expand OTP to work across multiple sites • Rollout into more widespread use Thanks to… • PKI06 Program Committee – Especially Frank Siebenlist who worked with us to improve the paper • Monte Goode, Mary Thompson, Jim Basney, Von Welch, Mike Helm, Eli Dart, Steve Lau, William Kramer, Buddy Bland, Scott Studham, Remy Evard, Tom Barron, Dane Skow, Craig Goranson, Gene Rackow, Tony Genovese, Dhiva Muruganantham, Suzanne Willoughby, Anne Hutton, Howard Walter, Frank Siebenlist, Ken Hornstein, Doug Engert, Love Hörnquist Åstrand Identity Federation and Attribute-based Authorization through the Globus Toolkit, Shibboleth, GridShib, and MyProxy Tom Barton1, Jim Basney2, Tim Freeman1, Tom Scavo2, Frank Siebenlist1,3, Von Welch2, Rachana Ananthakrishnan3, Bill Baker2, Monte Goode4, Kate Keahey1,3 1 University of Chicago 2 National Center for Supercomputing Applications, University of Illinois 3 Mathematics and Computer Science Division, Argonne National Laboratory 4 Lawrence Berkeley National Laboratory Abstract Laboratory has successfully operated an online Kerberos Certification Authority for a This paper describes the recent results of the number of years to allow its users to GridShib and MyProxy projects to integrate leverage existing Kerberos infrastructure for the public key infrastructure (PKI) deployed X.509 authentication [40]. for Grids with different site authentication mechanisms and the Shibboleth identity In parallel, Shibboleth [39] has been federation software. The goal is to enable developed by the Internet2 community and multi-domain PKIs to be built on existing is increasingly deployed both in the U.S. and site services in order to reduce the PKI abroad as a mechanism for cross-site access deployment and maintenance costs. An control for web-based resources. 
Shibboleth authorization framework in the Globus utilizes OASIS SAML standards [23,24,31] Toolkit is being developed to allow for for authentication and attribute assertion to credentials from these different sources to achieve its purpose. be merged and canonicalized for policy In this paper we cover recent work by two evaluation. Successes and lessons learned projects, GridShib [14,45] and MyProxy from these different projects are presented [1,27], working towards the integration of along with future plans. PKIs with both site authentication infrastructure and Shibboleth in order to 1 Introduction achieve large-scale multi-domain PKIs for The Grid [13] communities have developed access control.1 In section 2 we begin with a an international public key infrastructure brief review of the Globus Toolkit and (PKI) [20] as well as extensions to standard Shibboleth on which our work builds. In end entity certificates (EECs) in the form of section 3 we summarize our work and proxy certificates [42,44]. The combination lessons learned from the past year. We of this PKI and proxy certificates is used to conclude in section 4 with our plans for the provide cross-domain authentication, single upcoming year. sign-on, and delegation for a number of large deployments (e.g., [9,33,41]). As computational Grids have grown, there has been increasing interest in leveraging existing site authentication infrastructure to 1 We stress this infrastructure is for access control support this Grid authentication model. For and similar point-in-time decisions as opposed to example, Fermi National Accelerator long-term document signing for example. 2 Prior Work provide richer authorization policies exist as optional configurations. As is discussed In this section we provide a brief overview later, the GridShib project enhances the of the Globus Toolkit and Shibboleth on authorization options of the Globus Toolkit which our work builds. 
by adding standards-based attribute exchange for both authorization policies and 2.1 Globus Toolkit service customization. The Globus Toolkit [12] provides basic functionality for Grid computing with 2.2 Shibboleth services for data movement and job Shibboleth[39] provides cross-domain single submission, and a framework on which sign-on and attribute-based authorization higher-level services can be built. Over while preserving user privacy. Developed by recent years, the Grid has been adopting Internet2/MACE [21], Shibboleth is based in Web Services technologies, and this trend is large part on the OASIS Security Assertion reflected in recent versions of the Globus Markup Language (SAML). The SAML 1.1 Toolkit in implementing the Web Services browser profiles [19,23,36] define two Resource Framework [32] standards. This functional components, an Identity Provider convergence of Grid and Web Services was and a Service Provider2. The Identity part of our motivation for adopting Provider (IdP) creates, maintains, and Shibboleth, which is also leveraging Web manages user identity, while the Service Service technologies. Provider (SP) controls access to services and The Grid Security Infrastructure [46], on resources. An IdP produces and issues which the Globus Toolkit is based, uses SAML assertions to SPs upon request. An X.509 end entity certificates [18] and proxy SP consumes SAML assertions obtained certificates [44]. In brief, these certificates from IdPs for the purpose of making access allow a user to assert a globally unique control decisions. Shibboleth specifies an identifier (i.e., a distinguished name from optional third component, a “Where Are the X.509 identity certificate). We note that You From?” (WAYF) service to aid in the in Grid scenarios there is often an process of IdP discovery. 
organizational separation between the The Shibboleth specification [3] is a direct certificate authorities (CAs), which are the extension of the SAML 1.1 browser profiles authorities of identity (authentication) and [23]. While the SAML 1.1 browser profiles the authorities of attributes (authorization). begin with a request to the IdP, the For example, in the case of the Department Shibboleth browser profiles are SP-first and of Energy (DOE) SciDAC program [38], a therefore more complex [36]. single CA, the DOE Grids CA [7], serves a broad community of users, while the In addition to the browser profiles, attributes and rights for those users are Shibboleth specifies an Attribute Exchange determined by their individual projects (e.g., Profile [3]. On the IdP side, a Shibboleth National Fusion Grid, Earth Systems Grid, Attribute Authority (AA) produces and and Particle Physics Data Grid). issues attribute assertions, while a subcomponent of the SP called an Attribute Authorization in the Globus Toolkit is by default based on access control lists (ACLs) located at each resource. The ACLs specify 2 For the purposes of discussion, we adopt SAML 2.0 the identifiers of the users allowed to access terminology [17] throughout this paper, although our the resource. Also, higher-level services that work is currently based on SAML 1.1 technology. Requester consumes these assertions. Our revocation information in the form of work builds on Shibboleth attribute Certificate Revocation Lists (CRLs) [18] or exchange with a focus on authorization and online certificate status protocol (OCSP) access control in the Globus Toolkit. [28] responses. Users can run the MyProxy Logon application to obtain their complete The current implementation of the security context from the MyProxy service. specification is Shibboleth 1.3 (released July The MyProxy administrator maintains a set 2005), which has become our primary of trusted CA certificates and configures the development platform. 
We describe server to periodically fetch fresh CRLs. extensions and enhancements to the MyProxy Logon fetches the configured CA Shibboleth Identity Provider and Service certificates and CRLs in addition to the Provider components later in this paper. user’s end entity or proxy certificate and installs them in the local user’s environment. 3 Recent Results This work is inspired by Gutmann’s “Plug- In this section we provide a summary of our results from the past year. and-Play PKI” [15] which describes a PKI bootstrapping service aimed to make PKI 3.1 MyProxy enrollment as easy as adding a computer to the network with DHCP. Gutmann’s MyProxy began as an online credential PKIBoot service can use two methods to repository for X.509 proxy credentials bootstrap mutual trust between the un- encrypted by user-chosen passphrases [30]. initialized client and the certificate issuer. Users authenticate to the MyProxy service to The first method uses a shared secret (such obtain short-lived (per session) proxy as an enrollment password) to generate a credentials that are delegated from Message Authentication Code (MAC) for credentials stored in the repository. This each message. The second method is a gives users convenient access to proxy variant of the “baby-duck security model” credentials when and where needed, without where the client trusts the first issuer it finds requiring them to directly manage their for the one-time bootstrap operation. long-lived credentials. The latter remain protected in a secure repository, where the A drawback to the shared secret method is it repository administrator can monitor and becomes yet another password that users control credential access. must remember. 
Common site authentication methods, such as Unix In the past year, we have extended MyProxy passwords, One-Time Passwords, and to better integrate with existing site Kerberos, allow a service to verify a infrastructure and to make it easier for users password entered by the user, but don’t to bootstrap their X.509 security context. allow a service to lookup the user’s site New developments, described in the authentication password in advance for use following sections, include management of in a MAC or other secure password trust roots, standards-based integration with protocol. Thus existing site passwords site authentication, and the ability to act as a cannot be used and we must therefore have a Certificate Authority (CA). unique password for the bootstrap service. In environments where users must bootstrap 3.1.1 Managing Trust Roots their PKI context repeatedly as they use A user’s X.509 security context includes an different machines, it becomes necessary to end entity or proxy credential, one or more maintain a long-lived password or dedicated trusted CA certificates, and certificate one-time password stream using S/Key or PAM authentication is based on user equivalent. interaction, typically through one or more password prompts. In contrast, SASL The baby-duck method is well known to provides a flexible protocol framework for SSH users, who learn the public keys of supporting multiple authentication target hosts in the first connection attempt. mechanisms. The primary SASL mechanism This approach is generally accepted as used by MyProxy is GSSAPI, which allows “good enough” given the infrequency of users to authenticate with a Kerberos ticket connecting to a target host for the first time to obtain their X.509 credentials from and the infrequency of man-in-the-middle MyProxy. attacks in practice relative to keystroke loggers, Trojan horses, viruses, etc. 
MyProxy Logon currently supports two approaches to this initial bootstrapping. The first is to use an existing SASL mechanism that supports mutual authentication, such as Kerberos, for the bootstrap operation, leveraging existing site authentication infrastructure. The second is to distribute a trust root for the MyProxy service with the MyProxy client software distribution, recognizing that we trust this software distribution in any case not to capture passwords or otherwise misuse credentials. We have also prototyped the baby-duck approach and are considering it as a lighter-weight alternative.

3.1.2 Site Authentication

The MyProxy service can be configured to allow users to logon with existing site credentials, using Pluggable Authentication Modules (PAM) and/or the Simple Authentication and Security Layer (SASL). Through these mechanisms, users are not required to remember another username and password for the MyProxy service. Unix/Linux vendors support many PAM modules, including Unix password, One-Time Password, Radius, Kerberos, and LDAP. We have successfully tested our MyProxy PAM interface with Radius (and One-Time Passwords), Kerberos, and LDAP. PAM also supports access control and monitoring modules to implement standard security policies across multiple services.

3.1.3 MyProxy Certificate Authority

For users that don't already have X.509 credentials to store in the MyProxy repository, the administrator can configure MyProxy to act as an online CA to issue certificates in real time based on site authentication. The administrator must provide a mapping of authenticated usernames to certificate subjects, either in a configuration file or through LDAP. The user authenticates via MyProxy Logon to the MyProxy service, and MyProxy issues a certificate to the user with the subject provided in the mapping file.

By leveraging existing site authentication infrastructure through PAM and SASL, the MyProxy CA provides a lightweight mechanism for sites to distribute X.509 credentials.

3.2 GridShib: X.509 and SAML Integration

GridShib is a software product that allows for interoperability between the Globus Toolkit and Shibboleth. The complete software package consists of two plug-ins: one for the Globus Toolkit (GT) and another for Shibboleth. With both plug-ins installed and configured, a GT Grid Service Provider may securely request user attributes from a Shibboleth Identity Provider. In this section, we briefly describe both software plug-ins and then describe the profile by which they operate in greater depth.

3.2.1 GridShib for Globus Toolkit

GridShib for Globus Toolkit is a plug-in for Globus Toolkit 4.0. Its primary purpose is to obtain attributes about a requesting user from a Shibboleth attribute authority (AA) and make an access control decision based on those attributes. The plug-in implements a policy decision point (PDP) based on attributes obtained from the AA. A policy information point (PIP) does the actual work of requesting attributes. The separation between PIP and PDP allows the plug-in to be used in flexible ways within the toolkit's authorization framework.

3.2.2 GridShib for Shibboleth

GridShib for Shibboleth is a name mapping plug-in for a Shibboleth 1.3 identity provider. Its main purpose is to allow the servicing of attribute queries from Grid SPs based on the user's X.509 Subject distinguished name (DN). The plug-in allows the attribute authority to map the user's DN to a local principal name. Upon receiving an attribute query, the Shibboleth attribute authority uses this plug-in to map the DN and utilizes the resulting principal name to resolve attributes.

The name mapping is a memory-bound collection of name-value pairs. The name (key) is a canonicalized DN that conforms to RFC 2253 [43]. The value is the local principal name. The collection is initialized when the Identity Provider starts up. The current implementation of the name mapping construct is file-based, that is, the mapping entries are read from an ordinary text file. This text file is similar to the grid-mapfile used by Globus Toolkit.

3.2.3 GridShib Profile

The GridShib Profile is an extension of the Shibboleth Attribute Exchange Profile [3]. The primary difference is the use of X.500 distinguished names (DNs) to identify principals. The GridShib Profile is designed for a standalone attribute requester, that is, an attribute requester that does not participate in a Shibboleth browser profile. Consequently, the Grid SP does not have access to an opaque handle typically issued by the IdP on the front end of the browser profile. In lieu of a handle, the Grid SP uses the DN obtained from the client's proxy certificate.

The primary use case we consider here is a Grid Client that already possesses an X.509 end entity certificate (EEC). As is often the case in grid-based scenarios, the user uses their EEC to generate a proxy certificate as part of single sign-on. The proxy certificate is subsequently used to authenticate to Grid SPs as part of the act of requesting service.

We therefore make the following assumptions:
• The Grid Client and the Grid Service Provider (SP) each possess an X.509 credential.
• The Grid Client has an account with a Shibboleth Identity Provider (IdP).
• The IdP is able to map the Grid Client's X.509 Subject DN to one and only one user in its security domain.
• The IdP and the Grid SP each have been assigned a globally unique identifier called a providerId.
• The Grid SP and the IdP rely on the same metadata format and exchange this metadata out-of-band.

The GridShib protocol flow, depicted in Figure 1, consists of the following four (4) steps. Step 1 is the beginning of a normal grid request/response cycle. As usual, the Grid Client authenticates using their X.509 credentials to the Grid service provider. The Grid SP authenticates the request and extracts the client's DN from the credentials. At step 2, the Grid SP formulates a SAML attribute query whose NameIdentifier element is the DN extracted from the client's certificate in step 1. The Grid SP uses its X.509 credential to authenticate to the AA. At step 3, the IdP, or more specifically the attribute authority component of the IdP, authenticates the attribute request, maps the DN to a local principal name using the plug-in described earlier, retrieves the requested attributes for the user (suitably filtered by normal Shibboleth attribute release policies), formulates an attribute assertion, and sends the assertion to the Grid SP. Finally, at step 4, the Grid SP parses the attribute assertion, caches the attributes, makes an access control decision, processes the client request (assuming access is granted), and returns a response to the Grid Client.

GridShib for Shibboleth supports a framework for consuming Grid SP metadata whereby the metadata file includes an EntityDescriptor element for each Grid SP that the IdP trusts. SAML 2.0 does not define a role for Grid SPs, however, so an extended role of type AttributeRequesterDescriptorType has been specified [37] for use with this profile. The defined role of each such entity is basically that of a standalone attribute requester.

3.2.4 GridShib Software

Beta software that implements the GridShib Profile is available for download from the GridShib web site [14]. Source code is available, licensed under the Apache License, Version 2.0.

3.2.5 Current Implementation Limitations

While we believe our current implementation to be sound from a security perspective, the following administrative limitations are recognized:
• The file-based name mapping doesn't scale.
The fact that the DN-principal name pairs are read from a file is a major concern. Even if we were to provide administrative tools to manage the name mapping files, the overhead associated with this maintenance would be prohibitive for large user communities. Clearly, this overhead must be eliminated or at least reduced.
• IdP discovery must be generalized. In step 1 of the flow, we assume that a single IdP can assert attributes for all Grid Clients making requests of a Grid Service. A mechanism to allow a mapping between a user and their preferred IdP is needed.
• Metadata production and distribution needs to be automated or simplified. Both the IdP and the Grid SP rely on SAML 2.0 metadata [4] for their trust configuration (i.e., the certificates and public keys of the other entity). Trust in a GridShib deployment is based on a bilateral arrangement between IdP and Grid SP. By virtue of the fact that the two entities exchange and consume each other's metadata, a trust relationship is established. The problem is that n entities give rise to O(n²) bilateral relationships, which does not scale well.

[Figure 1. GridShib Protocol Flow]

3.3 Globus Toolkit Authorization Framework

As the Globus Toolkit is used by many different projects and by many different Grid communities, it is clear that it cannot mandate the use of particular technologies and mechanisms. Specifically in the area of attributes and authorization policies, the toolkit has to be very flexible to accommodate local preferences regarding assertion formats and usage patterns. This section enumerates the many certificate and assertion mechanisms that the toolkit has to support. It also describes an attribute collection and authorization framework that deals with the different mechanisms in a consistent manner and that is able to combine authorization decisions from many different sources to yield a single access decision for the invocation request.

3.3.1 Attribute Collection

When a client invokes a request to a service, that service may have to consider many different identity and attribute formats, like X.509 EECs, X.509 attribute certificates, SAML attribute assertions, LDAP attributes, Handle System [16] attributes, and configuration properties.

[Figure 2. Attribute Collection Framework]

Furthermore, the attributes can arrive at the service in a number of different ways. Some attribute assertions are "pushed" by the requester, as in VOMS [10] or CAS [34], where the assertion is bundled with the client request. Other attributes are "pulled" by the service from attribute services, like LDAP, SAML-compatible services like the Shibboleth Attribute Authority, or the Handle System. Note that each of the pull mechanisms uses different protocols. Lastly, attributes can also be locally stored in (configuration) files on the service side.

The validation of the attribute binding is also dependent on the assertion format and how the information was received. Some attribute bindings are asserted through public key signatures, while others are received unsigned but embedded in protected messages or received over authenticated channels.

As it is very common that requests by a client are made on behalf of other parties, some of those attribute values do not necessarily apply to the requester, but rather to other entities in the delegation chain.

Finally, the attribute names and values have to be considered within the context of their definition as well as the context of the issuer. Besides the vocabulary, semantics, and ontology that apply to the attribute bindings, it is also important to understand during the validation process whether the assertion is only valid in the local context of the issuer or in a global context that requires additional authorization assertions, which have to be evaluated by the resource owner.
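As a rough sketch of the collection step described above — accepting bindings from several sources, canonicalizing the subject names, and grouping the attributes that apply to the same entity — consider the following. This is our own illustration: the class names, the toy DN normalization, and the data layout are assumptions, not Globus Toolkit code:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class AttributeBinding:
    subject: str  # entity the attribute is bound to, e.g. an X.500 DN
    name: str     # attribute name in the issuer's vocabulary
    value: str
    source: str   # "x509-ac", "saml", "ldap", "config", ...


def canonical_subject(dn: str) -> str:
    """Toy DN normalization: lowercase the attribute types and strip
    whitespace. A real implementation would follow RFC 2253."""
    pairs = (part.split("=", 1) for part in dn.split(","))
    return ",".join(k.strip().lower() + "=" + v.strip() for k, v in pairs)


def collect(bindings):
    """Group validated attribute bindings per entity so that all
    attributes for one subject are available to the decision phase."""
    grouped = defaultdict(dict)
    for b in bindings:
        grouped[canonical_subject(b.subject)][b.name] = b.value
    return dict(grouped)
```

Two bindings naming the same subject with different DN formatting — say one from a SAML assertion and one from an attribute certificate — end up in a single per-entity collection.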
Proxy certificates are essentially examples of such authorization assertions. CAS uses SAML authorization decision assertions that are either embedded in proxy certificates or communicated in the SOAP header.

In order to manage the attribute collection in a consistent manner, the Globus team is in the process of developing a framework depicted in Figure 2. Its purpose is to accept and validate the various attribute assertion formats and mechanisms, to group all the attributes that apply to the same entity together, to translate the names and values into a single format, and finally to make the attribute collections available to the subsequent authorization decision processing phase. Note that the collected identity and attribute values have to be available for the authorization policy evaluations.

3.3.2 Authorization Mechanisms

As was the case for attribute collection, the processing of the authorization policy enforcement is a similar challenge because of the fact that many formats and mechanisms have to be supported. The applicable authorization policy can come from many different sources, like the resource owner, the resource domain, the requester, the requester's domain, the virtual organization, or intermediaries. There are many different mechanisms and languages used to express authorization policies, like grid-mapfiles, proxy certificates, SAML authorization decision assertions, CAS policy rules, XACML policy statements, PERMIS policies, and simple ACLs.

3.3.3 Authorization Decision Evaluation

After all the attributes and authorization assertions are collected, and internal and external authorization services are identified, the authorization decision for the access request can be determined. Authorization decisions can be evaluated within the same hosting environment as the policy enforcement point, or can be evaluated by external authorization services. External policy decision points (PDPs), like PERMIS [48], are accessed through the SAML 1.1 authorization query protocol or by using the SAML 2.0 Profile of XACML v2.0 [11].

In order to be able to deal with different authorization mechanisms, the authorization framework uses a PDP abstraction having the same semantics as the one defined in XACML, requiring that each authorization mechanism provides a PDP interface to the framework, each having its own custom decision evaluator that understands the intrinsic semantics of the policy expressions. The PDP abstraction allows the framework to use a common interface to interact with the different mechanism-specific authorization decision evaluators, keeping the mechanism-specific evaluations encapsulated. This common interface is modeled after the XACML request context interface, which essentially presents the decision request as a collection of attribute values for the subject, resource, and action. The PDP's evaluated decision result can have the values of permit, deny, or not-applicable. Note that the PDP's decision is associated with either the issuer of the policies that were evaluated or with the identity associated with an (external) authorization service. For each received authorization assertion and for each authorization service, a mechanism-specific PDP instance is created. As each of those PDP instances is queried through the same interface to evaluate authorization decisions, the mechanism-specific details are all hidden behind the abstraction.

We have the common delegation-of-rights scenario where one subject can empower others to work on her behalf through the issuing of policy statements. As a consequence, there can be multiple policies and decisions that have to be combined to yield a single decision about the access rights of the requester. The requester can push some of these policy statements or decisions as authorization assertions.

As shown in Figure 3, a separate Master PDP abstraction is used to combine all the different decisions from the various PDP instances in such a way that a single decision reflects the overall evaluated policy. In essence, this Master PDP queries the different PDP instances about the access rights of the requester and potential delegates, and searches for valid delegation decision chains that originate from the resource owner's policy and end with a statement that speaks to the access rights of the requester. The existence of such a valid delegation chain essentially states that the expressed delegation is allowed. Note that through the use of PDP abstractions, the framework is able to evaluate decisions about delegated access rights for the requester, without the need for explicit support of delegation in the policy languages used in the authorization mechanisms.

[Figure 3. Authorization Framework with PDP Abstraction of Authorization Mechanisms]

3.3.4 Current and Future GT Support

The currently shipping GT 4.0 implementation includes a simplified version of the described attribute collection and authorization framework, but does not fully support attribute-based authorization and has no support for fine-grained delegation of rights. It includes support for proxy certificate delegation, call-out support to SAML 1.1-compliant authorization services, grid-mapfile authorization, and an XACML evaluator. Enhancements to support Shibboleth and SAML attribute assertions have been added as part of the GridShib effort, and are included in the GridShib beta release. The full-featured authorization framework is under active development, has produced a number of prototypes, and will ship with our next major release, GT 4.2.

4 Next Steps

In this section we discuss our plans for work in the forthcoming year for enabling the seamless integration of Shibboleth/SAML and Grid Security/X.509.

4.1 GridShib

The limitations noted in the previous sections are being addressed. First of all, the file-based name mapping system will be augmented with a database implementation. This will not solve the maintenance problem, but it will make it easier to provide administrative tools. A database implementation will also facilitate the load-balancing of IdPs. (Load-balancing a cluster of IdPs is an ongoing issue in the Shibboleth Project. We do not want to exacerbate this problem.)

One approach to the IdP discovery problem is to include the IdP providerId in the user's X.509 certificate itself. Thus we are planning a modification to MyProxy that produces certificates containing this information. For this to work, we assume initially that MyProxy resides in the same security domain as the IdP. Further work will attempt to relax this restriction.

As mentioned earlier, metadata is an important aspect of GridShib (or any federated identity management system, for that matter). Therefore the following enhancements are being considered:
• provision attribute release policies (ARPs) from Grid SP metadata;
• consume IdP metadata and provision Grid SP configuration; and
• produce SP metadata from the underlying Grid SP configuration.

On the IdP side, tools to produce and consume metadata are being designed. In particular, a tool to automatically produce IdP metadata would be very helpful. (Other projects such as [26] are working on ARP tools that could take advantage of the attribute requirements called out in SP metadata.) Similar tools for the Grid SP are being developed.

Testing a classic, browser-based Shibboleth deployment remains a challenge. Testing GridShib on top of Shibboleth is even more difficult. To address this problem, we provide a command-line testing tool that tests both a Shibboleth AA and a GridShib AA. A discriminating test strategy is being built around this tool. To further simplify testing, centralized test services will be deployed. For example, we hope to stand up an on-line GridShib IdP that new Grid SP deployments can leverage for testing purposes.

4.2 Need for Name Binding

In the simplest case, access to a grid service is managed by providing all users with an X.509 end entity certificate (EEC) from a recognized CA, mapping the names in these EECs to another namespace local to the grid service, and using these local names in access control lists. GridShib provides a means of augmenting this approach to identity-based access control with an attribute-based capability: attributes bound to the distinguished name in the EEC are marshaled using Shibboleth and filtered through an access control policy to determine access to the grid service.

To broaden the availability of the grid service to more users, additional naming authorities may be recognized. In particular, we wish to enable use of established naming authorities, such as those local to a user's home organization, and authentication tokens other than X.509 EECs. However, we are constrained by the requirement that an EEC must be presented to the grid service, and that only attributes correlated with the distinguished name in that EEC can be marshaled.

This presents two problems. One is the exchange of an original authentication token for a suitable EEC to be presented to the grid service, which is treated elsewhere in this article. The other is mapping the distinguished name in this EEC to the name in the original authentication token, called the principal name, so that attributes bound to the principal name can be marshaled by the grid service. Because the principal namespace is not local to the grid service, and to support pseudonymous access scenarios, we propose to collocate this distinguished name to principal name mapping function with the authority for the principal namespace and the attributes that are bound to principal names. This will replace the grid-mapfile associated with the Shibboleth IdP in the initial GridShib beta product and will also support dynamic binding of principal names to distinguished names in EECs in a manner that enables the Shibboleth AA to map the distinguished name back to its principal name, enabling it to provide attributes for that principal.

4.3 Direct Client-server Use Case

There are two distinct but equally important scenarios in which this name binding must take place. In the first scenario, which we discuss in this section, the client application communicates directly with the service. The second scenario, which we discuss in the next section, involves a web portal intermediary.

When the client application and service communicate directly, end-to-end X.509 authentication is performed as part of the protocol (which is either based on TLS or SOAP with message-level security based on WS-Security [29]). The difficulty in this case is binding the identifier in the user's X.509 credential back to the principal name so that attributes may be obtained.

In this case, we believe that the online CA functionality in MyProxy (described in section 3.1) can be used to solve this problem. As shown in Figure 4, the user obtains short-lived X.509 credentials initially by authenticating to the MyProxy online CA using their principal name and password. (We use "password" here generically to indicate a static or one-time password, Kerberos credential, or any shared secret.) The MyProxy CA would then issue the X.509 credential, embedding into it the user's principal name. The service would then extract the principal name and use it when communicating back to the Shibboleth Attribute Authority.

[Figure 4. Different namespaces involved in an integrated MyProxy/Grid Service/Shibboleth transaction. The principal name used for authentication (at left) must be transmitted and used for attribute retrieval (upper right).]

We note that this approach has a distinct advantage over the current implementation in that the Shibboleth AA does not need to maintain a DN-to-principal name mapping, since the principal name is in the SAML query.

One approach is to use CryptoShibHandle [6], a modified Shibboleth handle that encrypts the principal name (along with a nonce and expiration time) into the handle itself. Encryption relies on a symmetric key shared with the Shibboleth Attribute Authority. Used in combination with a non-identifying X.509 DN, CryptoShibHandle preserves privacy by concealing user identity from the Grid service.

An open issue is the appropriate mechanism for embedding the principal name into the X.509 certificate. Current options being considered are to use the Subject Alternative Name or the Subject Information Access extension (sections 4.2.1.7 and 4.2.2.2 of [18], respectively). One could also embed the principal name into the DN itself (in fact the LionShare security profile [22] specifies precisely this), however we are concerned about placing requirements on the contents of the DN.

We also note that it would be desirable to embed the providerId of the Shibboleth Attribute Authority in the proxy certificate, allowing the Grid service to easily locate the Attribute Authority. This solves the IdP discovery problem discussed earlier.

4.4 Portal Use Case

The other use case mentioned in the previous section involves the client using a web browser to access a web server, which in turn accesses Grid services on behalf of the client. This use case is becoming more common as a means to allow for easy access to Grid services with a minimal footprint installation on the client system.

The primary observation in this case is that the portal effectively functions as a "chasm" that must be bridged. Either X.509 or Shibboleth/SAML can be used to authenticate to the portal, but neither has a delegation method that allows for the delegation of authority from the user of a web browser to a portal (see, however, recent work of Cantor [5]). This is the so-called n-tier problem (n > 2), an active research area.

We note that MyProxy has been used traditionally in the Grid community to enable a portal to use a client's username and password to obtain X.509 credentials for the client. Recent work [25] has also shown that this can be extended to web single sign-on using Pubcookie [35]. We believe this approach can be adapted to allow Shibboleth-issued SAML authentication assertions to be used to obtain X.509 credentials from MyProxy. (The newly formed ShibGrid and SheBangs projects, sponsored by the UK Joint Information Systems Committee, have similar goals, and we expect to collaborate on or leverage their work in this area.) As in the previous section, these X.509 credentials would have the principal name, taken from the NameIdentifier element in the SAML assertion, embedded in them. This would allow the Grid service to query the SAML Attribute Authority in an identical manner as described previously.

5 Conclusions

We have presented recent results from the GridShib and MyProxy projects. The goal of both projects is to ease PKI deployment costs by leveraging existing site infrastructure for the establishment of multi-domain PKIs to facilitate policy enforcement.

6 Acknowledgments

The GridShib work is funded by the NSF National Middleware Initiative (NMI awards 0438424 and 0438385). Opinions and recommendations in this paper are those of the authors and do not necessarily reflect the views of NSF. The MyProxy work was funded by the NSF NMI GRIDS Center and the NCSA NSF Core awards. The online CA work was implemented at LBNL. We thank the Internet2 Shibboleth development team for their continued cooperation.

"Globus Toolkit" is a registered trademark of the University of Chicago. "Shibboleth" is a registered trademark of Internet2.

7 References

1. Basney, J., Humphrey, M., and Welch, V. "The MyProxy Online Credential Repository," Software: Practice and Experience, Volume 35, Issue 9, July 2005, pages 801-816.
2. Box, D. et al. Simple Object Access Protocol (SOAP) 1.1. W3C Note, 08 May 2000. http://www.w3.org/TR/2000/NOTE-SOAP-20000508/
3. Cantor, S. et al. Shibboleth Architecture: Protocols and Profiles. Internet2-MACE, 10 September 2005. Document ID internet2-mace-shibboleth-arch-protocols-200509. http://shibboleth.internet2.edu/docs/internet2-mace-shibboleth-arch-protocols-latest.pdf
4. Cantor, S. et al. Metadata for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS SSTC, 15 March 2005. Document ID saml-metadata-2.0-os. http://www.oasis-open.org/committees/security/
5. Cantor, S. SAML 2.0 Single Sign-On with Constrained Delegation. Working Draft 01, 1 October 2005. Document ID draft-cantor-saml-sso-delegation-01. http://shibboleth.internet2.edu/docs/draft-cantor-saml-sso-delegation-01.pdf
15. Gutmann, P. Plug-and-play PKI: A PKI your Mother can use. Presentation given at the 12th USENIX Security Symposium, Washington, 2003.
16. The Handle System, http://www.handle.net/, 2005.
17. Hodges, J. et al. Glossary for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS Standard, 15 March 2005.
18. Housley, R., Polk, W., Ford, W., and Solo, D. Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. RFC 3280, IETF, April 2002.
19. Hughes, J. et al. Technical Overview of the OASIS Security Assertion Markup Language (SAML) V1.1. OASIS, May 2004.
20. International Grid Trust Federation, http://www.gridpma.org/, 2005.
Internet2 Middleware Architecture saml-sso-delegation-01 Committee for Education (MACE) http://shibboleth.internet2.edu/docs/draft- http://middleware.internet2.edu/MACE/ cantor-saml-sso-delegation-01.pdf 22. The LionShare Project 6. CryptoShibHandle https://authdev.it.ohio- http://lionshare.its.psu.edu/main/ state.edu/twiki/bin/view/Shibboleth/CryptoS 23. Maler, E. et al., Bindings and Profiles for the hibHandle OASIS Security Assertion Markup 7. DOEGrids Certificate Service, Language (SAML) V1.1. OASIS, http://www.doegrids.org/ September 2003. 8. http://ecl.iat.sfu.ca/wsidp/wsidp.pdf 24. Maler, E. et al., Assertions and Protocols for 9. Enabling Grids for E-sciencE (EGEE), the OASIS Security Assertion Markup http://public.eu-egee.org Language (SAML) V1.1. OASIS, 10. EU DataGrid, VOMS Architecture v1.1. September 2003. 2003. http://grid-auth.infn.it/docs/VOMS- 25. Martin, J., Basney, J., and Humphrey, M. v1_1.pdf. Extending Existing Campus Trust 11. Anderson, A. and Lockhart, H. SAML 2.0 Relationships to the Grid through the Profile of XACML v2.0. OASIS Standard, Integration of Pubcookie and MyProxy. 1 February 2005. Document id: 2005 International Conference on access_control-xacml-2.0-saml-profile-spec- Computational Science (ICCS 2005), May os 22-25, 2005. Emory University, Atlanta, 12. Foster, I. Globus Toolkit Version 4: GA, USA. Software for Service-Oriented Systems. IFIP 26. Meta-Access Management System (MAMS) International Conference on Network and http://web.melcoe.mq.edu.au/projects/MAM Parallel Computing, Springer-Verlag LNCS S/ 3779, pp 2-13, 2005. 27. MyProxy Credential Management Service 13. Foster, I., and Kesselman, C. (eds.). The http://grid.ncsa.uiuc.edu/myproxy/ Grid 2: Blueprint for a New Computing 28. Myers, M. et al. X.509 Internet Public Key Infrastructure. Morgan Kaufmann, 2004. Infrastructure Online Certificate Status 14. GridShib: A Policy Controlled Attribute Protocol (OCSP). RFC 2560, IETF, 1999. 
Framework http://gridshib.globus.org/ 29. Nadalin, A., et. al., Web Services Security: Certificate Profile, RFC 3820, IETF, June SOAP Message Security 1.0 (WS-Security 2004. 2004), March 2004. 43. Wahl, M., Kille, S., Howes, T., Lightweight 30. Novotny, J., Tuecke, S., and Welch, V.. An Directory Access Protocol (v3): UTF-8 Online Credential Repository for the Grid: String Representation of Distinguished MyProxy. In Proceedings of the Tenth Names, IETF, December 1997. International Symposium on High http://www.ietf.org/rfc/rfc2253.txt Performance Distributed Computing 44. Welch, V., Foster, I., Kesselman, C., (HPDC-10). IEEE Computer Society Press, Mulmo, O., Pearlman, L., Tuecke, S., 2001. Gawor, J., Meder, S. and Siebenlist, F., 31. OASIS Security Services (SAML) TC X.509 Proxy Certificates for Dynamic http://www.oasis- Delegation. Proceedings of the 3rd Annual open.org/committees/security/ PKI R&D Workshop, 2004. 32. OASIS Web Services Resource Framework http://middleware.internet2.edu/pki04/proce (WSRF) TC http://www.oasis- edings/proxy_certs.pdf open.org/committees/tc_home.php?wg_abbr 45. Welch, V., Barton, T., Keahey, K., ev=wsrf Siebenlist, F., Attributes, Anonymity, and 33. OpenScienceGrid, Access: Shibboleth and Globus Integration http://www.opensciencegrid.org to Facilitate Grid Collaboration, Proceedings 34. Pearlman, L., Welch, V., Foster, I., of the 4th Annual PKI R&D Workshop, Kesselman, C. and Tuecke, S., A 2005. Community Authorization Service for 46. Welch, V., Siebenlist, F., Foster, I., Group Collaboration. IEEE 3rd International Bresnahan, J., Czajkowski, K., Gawor, J., Workshop on Policies for Distributed Kesselman, C., Meder, S., Pearlman, L., and Systems and Networks, 2002. Tuecke, S. Security for grid services. In 35. Pubcookie: open-source software for intra- Twelfth International Symposium on High institutional web authentication Performance Distributed Computing http://www.pubcookie.org/ (HPDC-12). IEEE Computer Society Press, 36. 
Identity Federation and Attribute-based Authorization through the Globus Toolkit, Shibboleth, GridShib, and MyProxy

Tom Barton¹, Jim Basney², Tim Freeman¹, Tom Scavo², Frank Siebenlist¹,³, Von Welch², Rachana Ananthakrishnan³, Bill Baker², Monte Goode⁴, Kate Keahey¹,³
¹ University of Chicago
² National Center for Supercomputing Applications, University of Illinois
³ Mathematics and Computer Science Division, Argonne National Laboratory
⁴ Lawrence Berkeley National Laboratory

NIST PKI Workshop, April 4th 2006

Background

Globus Toolkit
• http://www.globus.org
• Toolkit for Grid computing
  – Job submission, data movement, data management, resource management
• Based on Web Services and WSRF
• Security based on X.509 identity- and proxy-certificates
  – May be from conventional or on-line CAs

Grid PKI
• Large investment in PKI at the international level for Grids
  – Dozens of CAs, thousands of users
• International Grid Trust Federation
  – http://www.gridpma.org
• Intended for point-in-time authentication
  – As opposed to, e.g., document signing
• Uses RFC 3820 Proxy Certificates for delegation and single sign-on
• Keys stored in Highest Common Technology == user's local filesystem

Shibboleth
• Internet2 project
• Standards-based (SAML)
• Allows for Identity Federation
  – Identity == Identifier + Attributes
  – Identifier may or may not be a persistent Name
  – Allows for pseudonymity via temporary, meaningless identifiers called "Handles"
• Allows for inter-institutional sharing of web resources (via browsers)
  – Provides attributes for authorization between institutions
• Being extended to non-web resources

MyProxy
• The team:
  – Jim Basney (lead), Bill Baker, Patrick Duda, Von Welch
• Many contributors
  – E.g. Monte Goode (LBNL)
• A service for managing X.509 PKI credentials
  – A credential repository
  – Long-lived private keys never leave the server
• Originally, a method for delegating credentials to Web Portals
  – Work-around for lack of delegation in Web Browsers
  – User delegates RFC 3820 Proxy Certificate to MyProxy; Portal delegates from MyProxy
• Open source software
  – Included in Globus Toolkit 4.0 and CoG Kits
  – C, Java, Python, and Perl clients available

GridShib
• NSF NMI project to allow the use of Shibboleth-issued attributes for authorization in Grids built on the Globus Toolkit
  – Funded under NSF NMI program
• GridShib team: NCSA, U. Chicago, ANL
  – Tom Barton, Tim Freeman, Kate Keahey, Raj Kettimuthu, Tom Scavo, Frank Siebenlist, Von Welch
• Working in collaboration with the Internet2 Shibboleth Design team

Common Goals of GridShib and MyProxy
• Ease of use for Grid PKIs
• X.509 credential management is a big headache for all involved
  – Users hate the process of getting certificates
  – Admins hate not knowing where private keys are
  – Everyone hates configuration overhead (mainly CRLs)
• Both projects working to use federation combined with X.509 to solve these problems
• Integration of site security with Grid security

Results from Past Year

MyProxy Authentication
• MyProxy has traditionally supported:
  – Key passphrase
  – X.509 certificate for credential renewal
• In the past year, we have added:
• Pluggable Authentication Modules (PAM)
  – Kerberos password
  – One-Time Password (OTP)
  – Lightweight Directory Access Protocol (LDAP) password
• Simple Authentication and Security Layer (SASL)
  – Kerberos ticket (SASL GSSAPI)
• Pubcookie

MyProxy Online Certificate Authority
• Issues short-lived X.509 End Entity Certificates
  – Leverages MyProxy authentication mechanisms
  – Compatible with existing MyProxy clients
• Ties in to site authentication and account management
  – Using PAM and/or Kerberos authentication
  – "Gridmap" file maps username to certificate subject
  – LDAP support for mapping
• Avoids need for long-lived user keys
• Server can function as both CA and repository
  – Issues certificate if no credentials for user are stored
• When combined with pluggable authentication, allows an easy way to leverage existing authentication for X.509 access
  – Kx509/KCA, replacing Kerberos with various technologies
• (Implemented by Monte Goode @ LBNL)

MyProxy: Managing Trust Roots
• Based on ideas put forth in Gutmann's plug-and-play PKI paper
• When the user authenticates to get an X.509 credential, also provide needed trust information
  – CA certificates, CRLs, other related policy
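The "gridmap" mapping mentioned above pairs a certificate subject DN with a local account name. As a minimal sketch of such a lookup (the quoted-DN-then-username line format follows the conventional gridmap style; the DNs and account names are invented examples, not real site data):

```python
# Minimal sketch of a gridmap-style lookup: each line pairs a quoted
# certificate subject DN with a local account name. The DNs and account
# names below are invented examples.
import shlex

def parse_gridmap(text):
    """Parse gridmap lines of the form: "/O=Grid/CN=Jane Doe" jdoe"""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        dn, account = shlex.split(line)
        mapping[dn] = account
    return mapping

gridmap = parse_gridmap('''
# subject DN                          local account
"/O=Grid/OU=Example/CN=Jane Doe"      jdoe
"/O=Grid/OU=Example/CN=John Smith"    jsmith
''')

print(gridmap["/O=Grid/OU=Example/CN=Jane Doe"])  # -> jdoe
```

A server holding such a table can map an authenticated certificate subject to a site account without any per-request user interaction, which is the point of tying the online CA into existing account management.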
GridShib Overview
• Two components
  – GridShib handlers for Globus Toolkit (GT4)
  – GridShib plugin for Shibboleth (1.3)
• Working together, they allow a GT service to request Shibboleth attributes
• And make authorization decisions based on those attributes
• All software open source

GridShib for Globus Plugin
• Three components
• Basic SAML Query Policy Information Provider (PIP)
  – Queries Shibboleth AA using X.509 DN and retrieves user attributes
  – Needs GridShib for Shibboleth plugin at the AA
• SAML identity mapper PIP determines local username from SAML attributes
• SAML PDP makes access control decision based on SAML attributes

GT Authorization Architecture
• GridShib work is forming the basis for a rich authorization architecture in GT
• Configurable collection of PIPs gather attributes regarding the user
  – SAML, X.509, local, etc.
  – Canonicalize to XACML Request Context
• Configurable collection of PDPs render the authorization decision
  – PDPs can be local or remote (GGF OGSA-Authz SAML protocol)
  – PDPs can be combined logically in different ways (AND or OR)
  – PDPs can gather their own attributes (e.g. PERMIS)
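The AND/OR combination of PDPs described above can be illustrated with a small sketch. This is an illustrative model only, not the actual Globus Toolkit authorization API; the helper names (`member_of`, `issuer_is`) and the attribute values are invented:

```python
# Sketch of combining policy decision points (PDPs) with AND/OR logic,
# as described for the GT authorization architecture. Illustrative model
# only, not the Globus Toolkit API; all names and values are invented.
def member_of(group):
    """PDP permitting any request whose attributes include the group."""
    return lambda attrs: group in attrs.get("groups", ())

def issuer_is(ca):
    """PDP permitting requests whose certificate issuer matches."""
    return lambda attrs: attrs.get("issuer") == ca

def all_of(*pdps):
    """AND combination: every PDP must permit."""
    return lambda attrs: all(p(attrs) for p in pdps)

def any_of(*pdps):
    """OR combination: one permitting PDP suffices."""
    return lambda attrs: any(p(attrs) for p in pdps)

policy = all_of(issuer_is("Example Grid CA"),
                any_of(member_of("physicists"), member_of("admins")))

request = {"issuer": "Example Grid CA", "groups": ["physicists"]}
print(policy(request))  # -> True
```

The design point is that each PDP sees the same canonicalized attribute set (the XACML Request Context in GT's case) and returns an independent verdict, so combinators can be nested freely.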
GridShib for Shibboleth Plugin
• NameMapper for Shibboleth IdP
• Converts X.509 DN into a locally meaningful name
• Currently uses static mapping
  – Already being improved on

GridShib Flow: Putting It Together
• User makes request of GT service as usual
  – X.509 authentication with SOAP
• GT SAML PIP queries Shibboleth AA using the DN
  – SAML Query protocol
• GridShib NameMapper converts from DN to local principal name
• Shibboleth AA returns SAML assertion with attributes
  – SAML Response protocol
• GT SAML PIP binds attributes to the DN in GT internal state
• GT then maps user to local account and/or renders access control decision

Next Steps

GridShib/MyProxy Integration
• Allow for leveraging of Shibboleth SSO for Grids
  – Need to convert Shibboleth SAML into X.509
• Accomplish by adding SAML authentication support to MyProxy
  – À la Pubcookie
• Have implemented prototype GridShib CA
  – Portal authenticates user; MyProxy trusts the portal to have done so and issues X.509 credential
  – Java Web Start application downloads credential from portal to user desktop
• Investigating full Shibboleth authentication to MyProxy
  – May have to wait until Shibboleth 2.x

The Name Mapping Problem
• End-to-end flow involves both protocol and name conversion
  – Site, SAML, X.509
• Not clear that these conversions should be co-located, or who should be authoritative

Name Binding
• If the site is authority for both SAML and X.509 names, then it can make mappings or use algorithmic transformation
• Today this is often not the case
  – E.g. CA is run by Grid community
• Two options we're exploring:
• User binds names by dual authentication
• CA binds names when it issues a credential
  – Either by direct communication with Shibboleth AA
  – Allows Shibboleth AA to recognize the DN
  – Or by embedding information into the X.509 certificate
  – Allows resource to know the Shibboleth name
• Working in collaboration with Jill Gemmill, J.P. Robinson @ UAB (myVocs)

Questions?
• vwelch@ncsa.uiuc.edu
• Project URLs
  – http://gridshib.globus.org
  – http://myproxy.ncsa.uiuc.edu
  – http://shibboleth.internet2.edu/
• Acknowledgements
  – The GridShib work is funded by the NSF National Middleware Initiative (NMI awards 0438424 and 0438385). Opinions and recommendations in this paper are those of the authors and do not necessarily reflect the views of NSF.
  – The MyProxy work was funded by the NSF NMI GRIDS Center and the NCSA NSF Core awards. The online CA work was implemented at LBNL.

PKI Interoperability by an Independent, Trusted Validation Authority

Jon Ølnes
DNV Research, Veritasveien 1, N-1322 Høvik, Norway
jon.olnes@dnv.com

Abstract. Interoperability between PKIs (Public Key Infrastructure) is a major issue in several electronic commerce scenarios. This paper suggests an approach based on a trust model where an independent Validation Authority (VA) replaces Certification Authorities (CA) as the trust anchor for the receiver of a PKI certificate (the Relying Party, RP). By trusting the VA, the RP is able to trust all CAs that the VA can answer for. The main issue is not technical validation of the certificates but assessment of quality, trustworthiness and risk related to certificate acceptance. The RP obtains a one-stop shopping service – one point of trust, one agreement, one bill, one liable actor, which may be beneficial for some business processes.
1. Introduction

Public key cryptography used with a PKI (Public Key Infrastructure) carries the promise of authentication, electronic signatures and encryption based on sharing of only non-secret information (public keys, names and other information in certificates¹). The same information (the certificate) may be shared with all counterparts, to replace separate, shared secrets.

The requirements on a counterpart (RP for Relying Party – relying on certificates) are that it must be able to validate the authenticity and integrity of the certificate and interpret the certificate's content. The RP also needs to assess the risk related to acceptance of the certificate, determined by the quality of the certificate, the trustworthiness of the issuer (the CA – Certification Authority), the liabilities taken on by the CA, and the possibilities for claiming liability in case of mistakes by the CA; all related to the security and business requirements of the operation in question.

In this picture, PKI interoperability is an important issue. An RP may need to accept certificates from a large number of PKIs. Consider DNV as an example: DNV is an international company with customers and partners in more than 100 countries all over the world. As an RP, DNV must be able to assess the risk related to acceptance of certificates from, in most cases, several CAs per country. In our work on the interoperability problem, DNV has concluded that a different approach is best suited to address these concerns, where interoperability is offered by means of an independent Validation Authority (VA).

The idea of a VA is not new, but in our approach the VA replaces CA(s) as the trust anchor for the RP. In common PKI practice, the trust model is reversed: a VA is delegated trust from the CAs it handles, and only CAs may be directly trusted.

In our trust model, it is important that the VA is neutral with respect to CAs, i.e. the VA service must be offered by an independent actor. A VA should be able to answer for validity, quality and liability related to certificates issued by "any" CA, thus providing RPs with the necessary information for their risk assessment. The requirement for independence with respect to CAs particularly applies to quality classification. VA services may additionally cover verification of signed documents (not only certificates) and may be extended to notary (trusted storage) and various related services [23]. A VA service may be general ("one size fits all") or customisable. Customisation may consist of defined quality profiles per RP and/or explicit specification of criteria (e.g. nationality) for CAs that shall be trusted or not by the specific RP.

Service providers as RPs may want to solve this situation unilaterally by requiring use of a certain PKI by their counterparts. This may be unacceptable to a counterpart (be that an individual customer or a business partner) that already has a certificate, and that does not want to acquire another one (or several more, if different RPs pose such requirements).

In the following, we clarify DNV's position in 2, describe requirements in 3, review existing approaches in 4, describe the independent VA in 5, and look more closely at the commercial and legal issues for a VA in 6. We conclude in 7.

¹ Another term is "electronic ID". A PKI-based electronic ID usually consists of two or three certificates and corresponding key pairs, separating out the encryption (key negotiation) function and possibly also the electronic signature (non-repudiation) function to separate key pairs/certificates. To a user, this separation is normally not visible. This paper uses the term "certificate", to be interpreted as covering the electronic ID term where appropriate.

2. DNV's Position and Role

DNV (Det Norske Veritas, http://www.dnv.com) is an independent foundation offering classification and certification services from offices in more than 100 countries. The maritime sector and the oil and gas industry are the main markets. DNV is also among the world's leading certification bodies for management systems (ISO 9000, ISO 14000, BS 7799 and others), delivering services to all market sectors.

DNV seeks to extend its existing position as a supplier of trusted third party services to digital communication and service provisioning. The first version of a VA service along the lines described in this paper will be offered to pilot customers mid-2006. This paper does not describe this pilot service, but rather the research leading to the decision to launch the pilot service.

3. Requirements for Interoperability

3.1 The PKI Interoperability Challenge

The PKI interoperability challenge can be described from two viewpoints:
− A certificate holder should be able to use the certificate towards all relevant counterparts, regardless of the PKI used by the counterpart.
− An RP should be able to use and validate certificates from all relevant certificate holders, regardless of the PKI used by the certificate holder.

The word "relevant" is the key to the severity of the interoperability challenge. In many cases, the set of relevant counterparts is limited by such criteria as nationality, business area, application area (e.g. banking) or any other criteria that an actor may find relevant. CAs may also put restrictions on use of certificates. Note however:
− Unlimited interoperability may be viewed as the ultimate goal, likened to the ability to make phone calls internationally.
− A service provider as an RP may want to accept certificates from as many CAs as possible, in order to reach as many customers as possible.
− A certificate holder may want to use one certificate for "any" service internationally.
− When a digitally signed document is created, the parties involved may be able to identify the relevant CAs. However, the document may need to be verified later by another actor, who may not have any relationship to any of these CAs.

3.2 PKI Deployment and International Aspects

PKIs are deployed in various contexts: society infrastructures for the general public (individuals, but also for businesses), corporate infrastructures (business internal), and community infrastructures (for particular purposes, e.g. banking). Interoperability is relevant where communication requires use of certificates across infrastructures.

PKIs as society infrastructures are being deployed in probably most developed countries for national electronic IDs. Society infrastructures cover at least individual citizens but may also cover businesses and individuals in the role of employees. The infrastructures are either based on PKIs run by public authorities or on services obtained from the commercial market. Society infrastructures are almost exclusively national, although some international co-ordination takes place. Notably, the EU Directive on electronic signatures [7] defines the concepts of qualified signatures/certificates as a means to achieve legal harmonisation across the EU in this area.

Even in countries with (plans for) public authority PKIs, the usual situation is several (2-15 is typical for European countries) public, commercial CAs competing in a national market. While PKI interoperability thus may be a challenge even at a national level, the scaling may be manageable. However, interoperability at an international level remains a severe challenge.

The topic is on the agenda. In Europe, interoperability of certificates and electronic signatures is identified as a key issue in creating an internal market² in the EU. One example is the IDABC (Interoperable Delivery of European E-government Services to Public Administrations, Businesses and Citizens) programme's statement on electronic public procurement [4]: "The interoperability problems detected [for qualified electronic signatures] despite the existence of standards, and the absence of a mature European market for this type of signatures pose a real and possibly persistent obstacle to cross-border e-procurement." Other examples can be found.

² Coined as "the SEEM" (Single European Electronic Market) in EU terms.

Internationally oriented businesses face the same challenges. Mandatory requirements for signatures are rare in the private sector, but businesses can benefit a lot from electronic signatures and PKI-based authentication. In an increasingly global society, restricting these mechanisms to a national level is too narrow. Solutions are being developed for particular commercial sectors, such as the SAFE Bridge-CA for the pharmaceutical industry [16]. The SAFE initiative shows that groups of actors may manage to work together towards interoperability in international communities.

However, in general the interoperability problem remains an issue. If not solved otherwise, the problem is left to the individual RP, but an RP acting by itself has a challenge handling the problem with confidence, i.e. with definable risk. This paper suggests VA services as a promising approach to solving the interoperability problem.

3.3 The Challenges to the RP

The interoperability challenges are best described from the viewpoint of an RP. With respect to a certificate, the RP must perform:
− Parsing and syntax checking of the certificate and its contents, including some semantic checking like use of the certificate compared to allowed use (key usage settings) and presence of mandatory fields and critical extensions.
− Assessment of the risk implied by accepting the certificate, determined by the CA's trustworthiness, the quality of the certificate, and the liability situation, relative to the operation in question.
− Validation of the CA's signature on the certificate. This requires a trusted copy of the CA's own public key, either directly available, or obtained from further certificates in a certificate path (see 4.1).
− A check that the certificate is within its validity period, given by timestamps in the certificate. For real-time checking, this must be compared against the current time. For old, signed documents, it is the time of signing that is of interest.
− A check that the certificate is not revoked, i.e. declared invalid by the CA before the end of the validity period. For real-time checking, the current revocation status is checked. For old, signed documents, status at the time of signing is checked.
− Semantic processing of the certificate content, extracting information that shall be used either for presentation in a user interface or as parameters for further processing by programs. The name (or names) in the certificate and interpretation of the naming attributes are particularly important.
− In the case of certificate paths, this processing must be repeated for each certificate in the path (see 4.1).

Syntactic parsing and checking of the validity period are usually straightforward operations. All other steps in certificate processing more or less have problems related to scaling, i.e. handling of certificates from a high number of CAs.

Management of information about CAs and their services (trustworthiness, quality of certificates, liability, possibility of enforcing liability, and trusted copy of the public key) gets increasingly difficult with the number of CAs. The liability situation can in general only be safely assessed through agreements, but it would be difficult for an RP to have explicit agreements with all relevant CAs. A consortium of RPs, e.g. in an industry sector, may be able to find approaches to diminish the problem.

The X.509v3 standard [14] defines the syntax of certificates, but leaves many options, and only partly defines the semantics of fields, attributes and extensions. Even though recommended profiles for X.509 certificates exist, certificates from different CAs often differ in content. This particularly applies to naming of subjects. An RP must either be able to use (parts of) names in a certificate directly for identification, or a name in a certificate must be reliably translated to a derived name that is useful to the RP. The security/quality of the translation process must preserve the quality of the certificate, i.e. the confidence in the derived name must be as if the derived name had been included in the certificate.

3.4 Legal Issues and Risk

An RP must not only be able to validate a certificate, but also be able to assess the risk involved in accepting the certificate for a given purpose. This raises legal and commercial concerns.

A question which an RP always faces is to know with confidence the liability taken on by the CA, and what recourse the RP has if the CA fails to fulfil its responsibility. An unknown liability situation may constitute a serious risk. An actor offering an interoperability service should on the one hand be able to take liability for its own actions (which on the commercial side means that it must have sufficient income or funding to cover the liability), and on the other hand at least provide guidance with respect to the liability taken by the CAs it covers. Preferably, the interoperability service should take on the CAs' liabilities and be able to transfer these to the responsible CA when appropriate, thus providing risk management for RPs.

CA liability is described in certificate policies and may be governed by (national) law. Additionally, agreements between a CA and RPs may control liability.
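The validity-period and revocation checks listed in 3.3 distinguish real-time checking (compare against the current time) from checking an old signed document (compare against the time of signing). That rule can be sketched as a toy model; real implementations of course work on full X.509 structures and CRL/OCSP data, and the dates below are invented:

```python
# Toy model of the validity-period and revocation checks described in
# 3.3: a certificate is acceptable at time t if t falls within
# [not_before, not_after] and the certificate had not been revoked by t.
# For real-time checking t is "now"; for an old signed document t is the
# time of signing. All dates are invented for illustration.
from datetime import datetime

def acceptable_at(not_before, not_after, revoked_at, t):
    if not (not_before <= t <= not_after):
        return False          # outside validity period
    if revoked_at is not None and revoked_at <= t:
        return False          # already revoked at time t
    return True

nb, na = datetime(2005, 1, 1), datetime(2006, 1, 1)
revoked = datetime(2005, 7, 1)

# Signature made in March 2005, before revocation: still acceptable.
print(acceptable_at(nb, na, revoked, datetime(2005, 3, 1)))   # -> True
# Real-time check in August 2005, after revocation: rejected.
print(acceptable_at(nb, na, revoked, datetime(2005, 8, 1)))   # -> False
```

The same evaluation-time parameter is what makes verification of old signed documents possible at all: the question is never "is this certificate valid now?" but "was it valid at time t?".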
In an international setting, certificate policies may be written in a foreign language and refer to 4.2 Peer-CA Cross-Certification foreign legislation with respect to the RP, and as cited Practical experience with peer-CA cross-certification above, it would be difficult for an RP to have (mutual recognition) has shown that the effort needed agreements with all CAs on which it may want to rely. is very large, in particular when the CAs are competi- Thus, the RP’s risk situation can be complex. tors. The author was involved in a project where three Current approaches to PKI interoperability may CAs in Norway managed to establish a cross- solve technical problems but they all have challenges certification regime, but repeating this effort is not on the commercial and legal side (see 4). In the context recommended. of a VA, these issues are discussed in 6. Large-scale cross-certification would create trust structures (“web of trust”, similar to the trust model 4. Approaches to PKI Interoperability used by e.g. PGP) that would be particularly complex 4.1 Trust Models and Certificate Paths with respect to path discovery. However, the technical issues are not the most important ones. Present PKI practice focuses on only CAs being Commercially, no CA is really interested in trusted. Given a large number of CAs, direct trust in solutions that improve market access for its each of them by an RP (trust list approach, see 4.5) competitors. Cross-certification may be tempting in becomes difficult. Present approaches seek to solve the cases where both CAs can gain from an increased scaling problems by trust structures among the CAs: market. In other cases, the commercial incentive peer-CA cross-certification (mutual recognition), simply does not exist, and the attitude will be to refrain hierarchy, or bridge-CA. Hybrid models are possible from cross-certification if possible, i.e. unless cross- but are not discussed in depth in this paper. certification is imposed by e.g. 
national authorities. Trust structures are created by issuance of certifi- Cross-certification with policy mapping means that cates to the CAs themselves; by peer-CAs, a bridge- the two CAs’ services are regarded as equal with CA, or a CA at a higher level of a hierarchy. The idea respect to quality. The complexity involved in the is that an RP should be able to discover and validate a policy mapping depends on the differences in the certificate path from a directly trusted CA (typically policies. There are a few common frameworks [3] [5] the root-CA of a hierarchy) to any CA (may be [6] for structuring of policies. Mapping between the previously “unknown”) that is a member of the same frameworks is not too complicated, and most CAs trust structure. In this, trust is regarded as a transitive adhere to one of the frameworks. Still, the real content property. The number of CAs directly trusted by an RP of policies may differ quite a lot. can be reduced. Cross-certification may imply that the CAs provide A general comment on trust structures is that guarantees for one another, so that a customer of one certificate path discovery may be a very difficult task CA may claim liability related to certificates issued by [20]. Sufficient support for path discovery is lacking in the other CA. This is governed by the cross- many PKI implementations. Also, certificate path certification agreement, but competing CAs may be validation may be very resource demanding due to the reluctant to enter such agreements. need for repeated certificate processing (the steps On an international level, peer-CA cross-certifi- described in 3.3). Caching of previously validated cation as a scalable solution to interoperability does trust paths can mitigate this problem. have significant challenges. The main use may be in Certificate path validation, possibly also path situations where the CAs are non-commercial, e.g. 
discovery, may be performed by a validation service corporate PKIs of co-operating businesses. (delegated path validation/discovery [21]). Note that the trust model suggested by this paper (see 5.2) 4.3 Hierarchy eliminates certificate path processing. In a hierarchy, CAs are assembled under a common “Trust” in this context mainly means the ability to root-CA, which issues certificates to subordinate CAs. find a trusted copy of a CA’s public key in order to Although a hierarchy may in theory have an arbitrary validate certificates issued by the CA. To some extent, number of levels, practical systems usually have two trust models can address quality (e.g. by policy levels: root-CA and certificate issuing CAs. mapping) but liability is in practice still left as an issue Hierarchies scale well, but if an indication of quality between the RPs and the individual CAs. of service of CAs shall be implied by the hierarchy, all CAs involved must have equal quality. This is usually enforced by a common base policy defined by the root- CA. A hierarchy consisting of “arbitrary” CAs dif- fering in quality and other policy aspects is theore- bridge-CA [22] based on the study in [11]3. This tically possible but practically infeasible. There is no initiative has only one quality level (presumably only reason to believe in a world-wide hierarchy as the qualified certificates are considered relevant). solution to PKI interoperability. However, hierarchies The FBCA is not liable to any party unless an reduce the number of CAs that must be directly trusted. “express written contract” exists ([9] section 9.8). The weak point in a hierarchy is the root-CA. This Similar limitations exist for the European bridge [22]. part is technically simple, but legally and commercially A commercial bridge-CA, such as the SAFE Bridge- very difficult. 
Although CAs may be willing to pay some amount to join a hierarchy, it is not possible to gain much income from operating a root-CA. A root-CA may run on governmental or international funding, or be run by a limited company jointly owned (for cost and risk sharing) by the CAs beneath the root-CA. Without an income, the owner of a root-CA, even if it is a governmental agency, will be reluctant to take on much liability, and liability may remain an issue between the RP and the individual CAs in the hierarchy.

Hierarchies exist; as an example, all CAs (for qualified certificates) approved by the German government are placed under a root-CA run by the Regulatory Authority for Telecommunications and Post [2].

At an international level, one may devise the establishment of yet another level in the form of international root-CAs on top of national root-CAs, or alternatively cross-certify between (the root-CAs of) hierarchies. Such structures will create complex certificate paths, and cross-certification between actors that do not take on liability (the root-CAs) may be a questionable approach. A better approach in this case is to use bridge-CAs to connect hierarchies.

4.4 Bridge-CA

A bridge-CA is a central hub with which CAs cross-certify. The bridge-CA should be run by some neutral actor, and it shall itself only issue cross-certificates. An RP may always start a certificate path to a given CA by starting at its own root of trust, and then proceed to a certificate issued by its root to the bridge-CA. For hierarchies, the usual situation is cross-certification between the bridge-CA and the root-CA. Thus, complicated certificate paths may occur even when using a bridge-CA.

Cross-certification between a CA and a bridge-CA is considerably simpler than peer-CA cross-certification, as the bridge-CA has no (competing) role in the issuing of certificates to end entities. Indication of quality may be done by requiring a CA to cross-certify with the bridge-CA at the appropriate quality level. As an example, the Federal Bridge CA (FBCA) in the USA defines five policy levels [9]. In Europe, IDABC has initiated a pilot project for a bridge-CA.

A bridge-CA such as the SAFE Bridge-CA [16] may take on more liability, but commercially a bridge-CA suffers from the same problems as the root-CA of a hierarchy: it may be difficult to get an income from the issuance of cross-certificates, and liability must usually be balanced by an income. Mainly, liability remains an issue between the RP and the individual CAs.

The FBCA does not provide validation services, but test suites are defined for path discovery [19] and path validation [18] related to the FBCA. A list of products that have passed the tests is found on the FBCA's web site. A bridge-CA might provide directory services and VA services [15] similar to those described in this paper. We argue that with such VA services, the bridge-CA functionality is actually obsolete and the VA functionality is sufficient.

Bridge-CAs have so far either a regional scope (as for the USA or the EU) or a defined business scope (which may be international, as for the SAFE Bridge-CA). This means that there is a need to link bridge-CAs in order to achieve general, global interoperability, thus creating more complex trust models. The FBCA has defined guidelines for such cross-certification (part 3 of [8]). As argued for hierarchies, cross-certification between actors that do not take on liability (the bridge-CAs) may be a questionable approach.

4.5 Trust List Distribution

A trust list consists of named CAs and their public keys. All CAs on the list are trusted. An example is the list of more than 100 CAs included in distributions of Microsoft OSs. This list contains actors that have been willing to pay the necessary fee to Microsoft. CAs may easily be added to or removed from the list, e.g. to introduce national CAs. An RP may manage a trust list entirely on its own.

Trust list management may also be done by a third party, which should regularly distribute lists to its subscribers. Interoperability is achieved by installation of compatible trust lists at all actors. An example [11] is a list of all (nationally) approved CAs in Europe. (Footnote 3: This study disapproves of a VA solution to interoperability. However, in this case the VA is an OCSP service with few similarities to the VA concept presented in this paper.) Quality information about CAs and their services is a fairly straightforward extension of a trust list, although this is not offered today.

The main problems with trust lists are the following:
− Liability is still an issue between the RP and the individual CA. As for quality information, liability information may in principle be distributed with the trust list; however, the distribution service is unlikely to help in claiming liability.
− We have not seen evaluations of the possibilities of making a trust list distribution service profitable. The subscribers will use the service only occasionally (regular but infrequent updates, or notification and download upon changes). CAs may be reluctant to pay (there are more CAs outside than on Microsoft's list). A service run by a publicly funded agency (national or international) may be an alternative.
− Correspondingly, a distribution service will be reluctant to take on much liability for its own service. RPs may download trust lists, and use them at their own risk.
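The trust-list mechanism described above reduces to a named set of CA keys that the RP itself (or a third party on its behalf) maintains, with easy addition and removal of CAs. A minimal sketch, with invented names and placeholder strings standing in for CA public keys:

```python
# Illustrative sketch of trust list management (section 4.5). All names and
# data are invented for the example; a real list would hold CA certificates
# and keys, not plain strings.

class TrustList:
    """A set of trusted CAs and their public keys, managed by the RP itself
    or distributed by a third party."""

    def __init__(self, entries=None):
        self.entries = dict(entries or {})  # CA name -> public key (placeholder)

    def add_ca(self, name, public_key):
        # e.g. introducing a national CA
        self.entries[name] = public_key

    def remove_ca(self, name):
        self.entries.pop(name, None)

    def is_trusted(self, name):
        return name in self.entries

rp_list = TrustList({"ExampleRootCA": "key-1"})
rp_list.add_ca("ExampleNationalCA", "key-2")
rp_list.remove_ca("ExampleRootCA")
```

The sketch also makes the paper's point visible: the list answers only "is this CA trusted?", and carries no quality or liability information unless that is distributed alongside it.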
5. The Independent Validation Authority

5.1 Outsourcing Certificate Validation

Certificate processing at an RP may be very resource consuming (see 3.3). This particularly applies to certificate path processing and to revocation checking by use of CRLs (Certificate Revocation Lists [14]). A more efficient revocation checking protocol, OCSP (Online Certificate Status Protocol) [17], has been developed to enable outsourcing of the revocation checking part. While OCSP was primarily designed for services provided by one CA, OCSP services that can answer about revocation status for certificates from several CAs are also in use. According to the OCSP specification, such a service must present a certificate from the given CA to prove that it has been delegated responsibility to answer about revocation status.

Since OCSP only transfers identification of the certificate and its issuer, not the complete certificate, the protocol cannot be used to support outsourcing of more of the steps in the RP's certificate processing. SCVP (Simple Certificate Validation Protocol) has been developed to address this weakness of OCSP and should be released as a "proposed Internet standard" in the near future. SCVP allows the complete certificate (or even a certificate chain) to be transferred. However, SCVP has been severely delayed, and support for the protocol seems to be low. Delegated certificate path processing is envisaged by the PKIX (Public Key Infrastructure X.509) working group of the IETF (Internet Engineering Task Force) [21], but the complexity is troublesome [20].

The main problem in our view is that the validation authority resides with the CAs. Below, we describe the advantages of decoupling the VA role from the CAs.

5.2 Revising the Trust Model for the RP

In our view, a fundamental flaw in present PKI practice is that a CA is the only actor that can serve as a trust anchor; i.e. a trust decision must ultimately always be linked to a trusted CA. This requirement leads to the necessity for trust structures and certificate paths in order to navigate from a trusted CA to an "arbitrary" CA.

The CA as the trust anchor is the right model for a certificate holder, who selects the CA(s) to obtain certificate(s) from. However, an RP should aim at acceptance of "any" CA's certificates, regardless of the CA's relationships to other CAs.

This paper instead suggests a trust model where an independent validation authority (VA) is the trust anchor for the RP. Upon trusting the VA, the RP is able to trust any CA that the VA handles. The VA handles each CA individually, regardless of any trust structure that the CA may participate in. Certificate path discovery and validation are irrelevant (although the VA may use such processing internally to aid in classification and other tasks), since there is no need to prove a path to a "trusted CA".

This trust model resembles a two-level hierarchy or use of a bridge-CA, but the VA does not issue certificates. It is an on-line service answering requests from RPs. As opposed to other interoperability services, an on-line VA may be able to run a profitable business by providing real risk management services to the RP. The idea is that the RP is provided with one-stop shopping for validation of certificates: one point of trust, one agreement, one point of billing, one liable actor.

5.3 Using a VA Service for Interoperability

Given this trust model, the state of the art in VA services may be considerably advanced. The RP outsources all (or parts of, see 3.3) its certificate processing to the VA, regardless of the CA that has issued the certificate. The VA checks validity with the appropriate CA, but returns its own answer, not an answer originating from the CA. The answer includes information on quality, trustworthiness, and liability, and possibly auxiliary information derived from certificates. Such information may be other names for the certificate holder (the name in the certificate need not in itself be useful to the RP) or further information related to the certificate holder, such as age, sex, or a credit check. Auxiliary information may originate from the CA as well as from other sources, and the information may be general or RP specific.
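The revised trust model can be illustrated with a small sketch: the RP asks the VA, and the VA returns its own answer carrying validity, a quality class and a liability figure for each CA it handles, with no certificate path involved. All class and field names here are invented for illustration; they are not taken from DNV's service.

```python
# Illustrative sketch of the VA trust model (sections 5.2-5.3). Names, classes
# and amounts are invented; a real VA would perform actual revocation checking
# and classification internally.
from dataclasses import dataclass, field

@dataclass
class VAAnswer:
    """The VA's own answer to the RP (not an answer relayed from the CA)."""
    valid: bool            # validity/revocation status as checked by the VA
    quality_class: int     # e.g. classes 1-10, cf. section 5.4
    liability_eur: int     # liability the VA takes on for this answer
    auxiliary: dict = field(default_factory=dict)  # e.g. alternative names

class ValidationAuthority:
    """Trust anchor for the RP: handles each CA individually, so no
    certificate path discovery or validation is needed by the RP."""

    def __init__(self):
        self.handled_cas = {}  # internal trust list: CA -> (quality, liability)

    def register_ca(self, ca_name, quality_class, liability_eur):
        self.handled_cas[ca_name] = (quality_class, liability_eur)

    def validate(self, issuer_ca, revoked):
        # The RP needs no trust relationship with issuer_ca, only with the VA.
        if issuer_ca not in self.handled_cas:
            return VAAnswer(False, 0, 0, {"reason": "CA not handled by this VA"})
        quality, liability = self.handled_cas[issuer_ca]
        return VAAnswer(not revoked, quality, liability)

va = ValidationAuthority()
va.register_ca("ExampleNationalCA", quality_class=7, liability_eur=10000)
answer = va.validate("ExampleNationalCA", revoked=False)
```

Note how the design choice follows the paper: the answer is produced and signed off by the VA itself, so the RP's risk picture is defined by one agreement with one liable actor.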
Thus, the VA acts as a clearinghouse for information about CAs and their certificates, with a possibility for further, value-added services. The main feature is support for risk management for the RPs. A VA may be provided in a "one size fits all" manner, or it may be configurable to meet the requirements of individual customers (RPs). The VA does not remove the complexity of interoperability, but it handles the complexity in one place, for all RPs that have outsourced certificate processing to the VA. Internally, the VA operates a trust list of the CAs it is able to answer for.

5.4 Classification Related to VA Services

As noted, a VA shall not only return an answer about validity, but also an indication of quality, trustworthiness and liability related to a certificate.

The quality of a CA's certificates is mainly derived from its certificate policy [3] [5] [6]. Trustworthiness is determined by an assessment of the actor running the CA, e.g. to confirm that the CA is able to fulfil its liability in case of errors. Other documentation may also be of relevance, such as certification practice statements and agreements with certificate holders and other actors (including membership in hierarchies and cross-certification regimes). Liability is discussed in 6 below.

The documentation must be measured against a classification system, defined as a set of quality and trustworthiness parameters, and criteria for meeting certain levels related to these parameters. In the simplest case, the resulting classification may be mediated as a number (say, classes 1-10), but it is also possible to define data structures in order to mediate a more fine-grained classification with respect to the parameters. An RP may be allowed to define its requirements in the same manner (either as "at or above level x" or "according to the values in this structure"). The VA may compare the RP's requirements to the classification. The result may be a yes/no answer or a report on deviations from the desired quality profile. A particular classification is assessment of compliance with national or international legislation, e.g. that the requirements for qualified certificates/signatures [5] are met.

Such a classification system resembles policy mapping for cross-certification, but the system is more flexible. The classification system rates certain characteristics of a CA and its services to obtain either an overall score or a descriptive structure, whereas a policy mapping needs to determine compliance between two policies. A classification system with just a few discrete classes may be close to a policy mapping scheme (e.g. the five levels of the FBCA), while a more fine-grained classification allows CAs to differ in policies but still fit in the classification scheme. Since agreed quality levels, like the qualified level in Europe and the FBCA levels in the USA, are regional in scope, a flexible classification system may be important for international interoperability.

Note that the documentation only presents the quality and trustworthiness claimed by the CA. A classification must include an "evaluation assurance level" to indicate to what degree an assessment of actual operation has been done. Levels may be: self-assessment by the CA (possibly augmented by acceptance by a surveillance authority, such as demanded by the EU Directive on electronic signatures [7]), a report from a surveillance agency or a third party auditor, and certification (to BS7799 [1], ISO 15408 [12], ISO 9000 etc.). (Footnote 4: Information security management is usually developed according to ISO/IEC 17799 [13], which is based on BS7799 part 1. However, certification is still done according to BS7799 part 2, since the certification part has not yet been approved by ISO.) Classification criteria for CAs may be used to develop specific criteria for quality certification of CAs. The evaluation assurance level may be incorporated in the quality indication (higher assurance implies higher quality), or it may be mediated as a separate parameter.

DNV is among the world's leading actors in classification and certification, and work is ongoing on the development of classification criteria and a classification system for CAs in conjunction with VA services. At present, we leave open the question of whether a classification system should be standardised or be left as a competitive element for a VA. In DNV's present services, classification may be based on standards (e.g. certification to ISO 9000 or similar standards) or competitive (e.g. DNV's own class rules for ships).

5.5 A Note on Openness of PKIs

A VA is based on the assumption that the CAs provide open PKIs. Our basic criterion for technical openness is that an RP should be able to use any standards-based software to process certificates and signed documents. PKI support is included in almost all platforms, and the RP should be able to base its processing on such built-in functionality (with enhancements if needed), regardless of the CA.

This assumption is unfortunately broken by many PKIs, which require particular software to be installed at the RP in order to accept and process certificates and documents issued/signed under the PKI. Such PKIs are in effect closed, in that the certificates can only be used between parties that have all installed the software. Examples are solutions that require particular Java applets or similar to be transferred from a service provider (the RP) to a certificate holder, and solutions that use proprietary protocols between certificate holder and RP and/or between RP and CA.
It is clear that such PKIs cannot properly support interoperability, since one cannot expect all possible RPs to install the software. Also, an RP (typically a service provider) cannot be expected to install such software related to more than a few PKIs. In some cases, such software (e.g. to process signed documents) may be installed at a VA instead of at the RP, but in many if not most cases the RP is stuck with the extra software. We believe that such closed solutions eventually must be changed, but in the short to medium term they will cause a major problem for interoperability.

Some CAs require explicit agreements with all RPs. (Footnote 5: This is almost always the case for PKIs that require particular software to be installed. An agreement covers both purchase of the software and acceptance as an authorised RP.) The CA's policy states that the CA takes no liability unless the RP has such an agreement. Large-scale interoperability cannot be achieved, as it is not possible to have agreements with every possible RP. A VA may sign a "bulk agreement" with such CAs: one agreement covering all RPs using the VA. This may solve the agreement issue, but the CA has to approve the solution (see also 6.1 below).

A VA may solve some, but not all, issues related to closed PKIs. However, an approach based on trust structures and certificate paths cannot solve any of the issues, since the problems are related to processing and validation of certificates and signatures, not to path discovery and path validation.

5.6 Implementation, Performance, Availability

The technical realisation of a VA service is not a central topic of this paper. However, the following observations are made:
− A VA is an on-line trust service subject to severe requirements for availability and security. These requirements are enforced on the software and hardware used, as well as on the operational environment of the service.
− A VA needs to handle the heterogeneity encountered in the PKI area, including support for various certificate profiles, cryptographic algorithms and protocols.
− For scaling, a VA must be replicated. Synchronisation between instances of the VA service, and optimisation of the collection of revocation information and auxiliary information, must be in place.

Outsourcing certificate processing to a VA may improve performance, since an optimised and dedicated installation is used at the VA. The avoidance of certificate path discovery and validation procedures greatly improves speed in cases where this would normally be needed. However, the VA solution must scale, and performance is influenced by factors like the communication link between RP and VA.

When RPs operating critical services rely on a VA, the VA's availability must be guaranteed. There are two main issues involved:
− Availability of the VA towards the RPs. This is similar to availability of other critical systems, and measures are reliable systems and communication links, redundancy, protection against DoS attacks and so on.
− Availability of updated status information from the CAs. If a CRL download or an OCSP request fails, the VA must either report an error to the RP or risk an answer based on the old, cached status information. If a CRL download is too slow, the VA may also need to answer based on old information. Optimising status information updating is very important; see 5.7.

5.7 Interfacing a VA

For the interface between an RP and a VA, today's standard validation protocol, OCSP [17], clearly has too limited functionality. The successor, SCVP, has been severely delayed, and support for the protocol seems to be low.

A better approach, in our opinion, is to provide VA services as Web Services. The XKISS part of XKMS [10] is a good starting point for the VA interface. The XML documents exchanged with the VA may in the future be subject to standardisation. In any case, a VA should publish its XML specifications in order to enable integration software produced by "anyone". The desired level of standardisation may be limited by the heterogeneity of different VA services, and by the possibility of tailoring VA services to specific customers.

For performance, a VA must optimise the gathering of information from CAs (and possibly from other sources for auxiliary information) and answer requests as far as possible based on information cached locally.
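The status-information handling discussed in 5.6 and here can be sketched as follows: answer from a locally cached CRL while it is fresh, fall back to a direct OCSP request to the CA when it is not, and report an error rather than silently answering from stale data. The freshness limit, timestamps and names are invented, and real CRL handling would of course parse signed CRL structures rather than sets of serial numbers.

```python
# Sketch of cached-CRL checking with OCSP fallback (sections 5.6/5.7).
# All values are invented for illustration.

CRL_MAX_AGE = 3600  # seconds; a real VA would configure this per CA

class StatusCache:
    def __init__(self):
        self.crls = {}  # CA name -> (fetch time, set of revoked serial numbers)

    def store_crl(self, ca_name, revoked_serials, fetched_at):
        self.crls[ca_name] = (fetched_at, set(revoked_serials))

    def check(self, ca_name, serial, now, ocsp_fallback=None):
        entry = self.crls.get(ca_name)
        if entry is not None and now - entry[0] <= CRL_MAX_AGE:
            status = "revoked" if serial in entry[1] else "good"
            return (status, "cached CRL")
        if ocsp_fallback is not None:   # direct OCSP request to the CA
            return (ocsp_fallback(ca_name, serial), "OCSP fallback")
        return ("error", "no fresh status information")

cache = StatusCache()
cache.store_crl("ExampleCA", revoked_serials=[101, 202], fetched_at=1000)
```

At time 1500 the cached CRL is fresh and serves the answer; at time 10000 the cache is stale, so the VA must either ask the CA directly or report an error to the RP, matching the availability discussion in 5.6.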
The preferred option is CRL download, with OCSP requests to the CA as a fallback alternative. CRL download must be configurable and be done by a separate process. A polling strategy may be used in order to catch CRLs issued out of or before schedule. Delta-CRLs and CRL push mechanisms should be exploited wherever available.

All interfaces to and from a VA must be secured. The communication links should be protected by use of SSL (or similar means), and it must be possible to sign requests and responses between the RP and the VA, and between the VA and the CAs. Authentication of the RPs (and of the VA towards the CAs) is done either when the SSL channel is established or through signatures on requests.

The RPs may be authenticated by certificates issued by their preferred CA. The VA's own certificates can either be obtained from one or several CAs (this may be needed to authenticate towards the CAs), or the VA may authenticate by a self-signed certificate to pinpoint its position as an independent trust anchor.
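Two of the requirements above — a signed request/response exchange, and a per-RP audit log usable as evidence (see also 5.8) — can be sketched as follows. An HMAC over a shared key stands in for the certificate-based signatures and SSL channel the paper assumes, and all names and identifiers are invented.

```python
# Sketch of signed RP/VA exchanges and evidential logging (sections 5.7/5.8).
# HMAC is a stand-in for real signatures; names and data are invented.
import hashlib
import hmac
import json

def sign(key: bytes, message: dict) -> str:
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(key: bytes, message: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(key, message), signature)

audit_log = []  # each entry identifies certificate, RP and time, cf. 5.8

def log_validation(rp_id, certificate_id, timestamp, answer):
    audit_log.append({"rp": rp_id, "certificate": certificate_id,
                      "time": timestamp, "answer": answer})

def evidence_for(rp_id):
    # Log information must only be available to the correct RP (section 5.8).
    return [entry for entry in audit_log if entry["rp"] == rp_id]

shared_key = b"rp-va-demo-key"
request = {"certificate": "cert-42", "rp": "rp-1"}
signature = sign(shared_key, request)
log_validation("rp-1", "cert-42", "2006-04-04T09:00:00Z", "good")
log_validation("rp-2", "cert-99", "2006-04-04T09:01:00Z", "revoked")
```

Any tampering with the request invalidates the signature, and each RP can retrieve only its own log entries as evidence in a dispute.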
The VA’s 5.8 Privacy and Identity Management liability must be clearly stated and accepted in the VA’s agreement with the RP, and the cost to an RP Miscellaneous scenarios can be used to illustrate may depend on the level of risk that the VA takes. potential relationships between a VA and identity Thus, the RP faces a clear risk picture and is provided management services. A VA may take on the role of an with some risk reduction. However, a VA will Identity Provider according to the Liberty Alliance definitely limit its liability. framework. In this case, the XML document produced A VA is an on-line service, and there is a clear risk as a response to a request will be a SAML V2.0 token that this will constitute a single point of failure for the including certificate information and auxiliary RP. Unavailability of the VA will disable use of information. A VA may also be placed “behind” an certificates for all RPs affected by the situation. This Identity Provider, enabling the Identity Provider to situation must be covered by service level agreements outsource certificate processing. Even in this case a between the RPs and the VA. Additionally, the VA SAML V2.0 token may be the appropriate answer from actor must ensure a service with very high availability, the VA. as discussed in 5.6. The VA must reliably log all actions performed, An RP must also evaluate the risks related to since the VA must be prepared to supply evidence in continuation of the VA’s service offering, such as case of disputes. Disputes need not involve the VA bankruptcy of the actor behind the VA. A competitive itself; an RP involved in a dispute with a customer may environment should exist for VAs (see 6.2 below), and consult the VA for evidence. The log information will interfaces should be published and openly available to include information on all certificate validations with ensure that an RP is able to change to another VA. identification of certificate, RP and time. 
A change from a VA model to a non-VA model (based on trust structures such as bridge-CAs) may however require more work on the RP side. The agreement between an RP and a VA should ensure that logs and other material of potential evidential value can be transferred to the RP if the agreement is terminated.

The jurisdiction for an agreement between an RP and the VA will preferably be determined by the VA, but an RP may demand an agreement according to its own legal environment when the VA and the RP are in different jurisdictions (e.g. different countries).

A VA will on the other hand in most cases need agreements with the CAs (and other information providers). Relying on general statements in a CA's policy will be too risky. An agreement will in most cases be according to the CA's jurisdiction, since the agreement resembles a relying party agreement with respect to the CA.

Note that such an agreement additionally provides risk management for the CA. As one example, the EU Directive on electronic signatures [7] mandates in principle unlimited liability for a CA issuing qualified certificates. Today, the only way for such a CA to control liability is to require agreements with all RPs. With a VA, the chain of agreements from a CA to a VA and on to the RPs may be used to limit liability. Thus, a VA should aim at a situation where all relationships between actors are covered by agreements, providing a clear risk picture.

A VA is not an issuer of certificates and thus can assess the validity and quality of a certificate, but not the correctness of a certificate's content. The VA can take on liability for certificate content, but only if this liability can be transferred to the appropriate CA.

Operation of a VA as described in this paper may depend on changes in national legislation. As one example, the German legislation [2] requires a foreign CA to cross-certify with a German CA in order to have its qualified certificates accepted in Germany; the Regulatory Authority for Telecommunications and Post must approve the cross-certification. This is an unfortunate implementation of the paradigm that only a CA may be a trusted actor in PKI. However, an interpretation where a VA may take the CA's role, and where the requirement for a cross-certificate as the mechanism is relaxed, would solve the situation.

6.2 Customers, Payment, Competition

The liability that the VA takes on, and the operational costs of a VA, must be balanced by an income if the VA shall be able to make a profit out of the service. A VA provides on-line services. The RP will pay for the VA services according to the business model agreed (transaction based, volume based or fixed), and the VA in turn may pay CAs and other information providers according to agreements.

PKI interoperability problems are faced by service providers (government and business) requiring PKI-based authentication and signatures from their customers, and by businesses for (signed) B2B communication. However, VA services to the general public, e.g. to verify signed email no matter which CA the sender uses, are also interesting. It is recognised that for the general public, anonymous access is beneficial; but note that most auxiliary information that can be returned from a VA needs to be subject to access control, and will require authentication. At present, payment also requires authentication.

CAs are off-line services. A CA might prefer a low price for the issuing of certificates combined with a fee for use of certificates, where this fee is collected from the RPs. Pay per use is only possible for on-line services, which for a CA are revocation checking and directory services. If revocation checking is based on CRLs, an RP will typically download CRLs periodically to a cache and perform further revocation checking from the cache. If the RP instead uses a VA, the VA may provide per-use billing even for CAs that only provide CRLs.

An RP should need to trust, and have a contract with, only one VA. A competitive market exists for certificates (CA services), and correspondingly a competitive market should exist for VA services. Competition should be based on cost and quality of service (QoS). In addition to customary QoS parameters like response time and availability, QoS elements for a VA may be e.g. the number of CAs handled, the responsibility/liability taken on by the VA, the classification scheme used, the possibilities for auxiliary information, and the interface(s) offered. Competition is limited if the interfaces offered by a VA are closed and proprietary, necessitating a "deep integration" with systems at the RP. We suggest use of Web Services with published XML specifications to interface a VA (see 5.7).

7. Conclusions

An alternative approach to PKI interoperability is suggested, where interoperability is offered by means of an independent, trusted Validation Authority (VA). The trust model for the PKI Relying Party (RP) is revised: the RP places direct trust in the VA, not in CAs, and is then able to trust all CAs that the VA handles. The VA handles all CAs individually, thus eliminating the need for trust structures among CAs and the resulting certificate path discovery and validation procedures.

A VA must be offered by an actor independent from the CAs. The VA should provide to an RP: status on the validity of a certificate, a quality classification of the certificate, and a clear picture of the liability issues. A VA must take on liability for its actions, thus providing risk reduction for the RPs. A commercial VA must provide enough added value to its customers to be able to cover liability and expenses and run a profitable business. The main achievement for an RP, in addition to risk reduction, is one-stop shopping (agreement, billing, complaining, trust, liability) for acceptance of certificates.
The VA scheme is based on agreements, between the VA and the RPs on the one hand and between the VA and the CAs on the other. Thus, unlike other approaches to PKI interoperability, the RP obtains an agreement for acceptance of certificates from any CA.

References

1. British Standards Institute: Specification for Information Security Management Systems. British Standard BS 7799-2:2002 (2002)
2. Bundesnetzagentur: Ordinance on Electronic Signatures. (2001)
3. Chokhani S., Ford W., Sabett R., Merrill C., Wu S.: Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework. RFC 3647 (2003)
4. Commission of the European Communities: Action Plan for the Implementation of the Legal Framework for Electronic Public Procurement. Communication from the Commission to the Council, the European Parliament, the European Economic and Social Committee and the Committee of the Regions (2004)
5. ETSI: Policy Requirements for Certification Authorities Issuing Qualified Certificates. ETSI TS 101 456 v1.2.1 (2002)
6. ETSI: Policy Requirements for Certification Authorities Issuing Public Key Certificates. ETSI TS 102 042 v1.1.1 (2002)
7. EU: Community Framework for Electronic Signatures. Directive 1999/93/EC of the European Parliament and of the Council (1999)
8. Federal PKI Policy Authority (FPKIPA): US Government Public Key Infrastructure: Cross-Certification Criteria and Methodology, Version 1.3 (2006)
9. Federal PKI Policy Authority (FPKIPA): X.509 Certificate Policy for the Federal Bridge Certification Authority (FBCA), Version 2.1 (2006)
10. Hallam-Baker P., Mysore S.H. (eds.): XML Key Management Specification (XKMS 2.0). W3C Recommendation (2005)
11. IDA: A Bridge CA for Europe's Public Administrations – Feasibility Study. European Commission – Enterprise DG, PKICUG project final report (2002)
12. ISO: Evaluation Criteria for IT Security. ISO 15408 Parts 1-3 (1999)
13. ISO/IEC: Information Security Management – Code of Practice for Information Security Management. ISO/IEC 17799 (2000)
14. ITU-T | ISO/IEC: OSI – The Directory: Authentication Framework. ITU-T X.509 | ISO/IEC 9594-8 (1997)
15. Malpani A.: Bridge Validation Authority. ValiCert White Paper (2001)
16. McBee F., Ingle M.: Meeting the Need for a Global Identity Management System in the Life Sciences Industry – White Paper. SAFE BioPharma Association (2005)
17. Myers M., Ankney R., Malpani A., Galperin S., Adams C.: X.509 Internet Public Key Infrastructure Online Certificate Status Protocol – OCSP. RFC 2560 (1999)
18. NIST: Public Key Interoperability Test Suite (PKITS) Certification Path Validation (2004)
19. NIST: Path Discovery Test Suite, Draft Version 0.1.1 (2005)
20. OASIS: Understanding Certification Path Construction. White Paper from PKI Forum Technical Group (2002)
21. Pinkas D., Housley R.: Delegated Path Validation and Delegated Path Discovery Protocol Requirements. RFC 3379 (2002)
22. TeleTrusT Deutschland e.V.: Bridge-CA Certificate Practice Statement (CPS) (2002)
23. Ølnes J.: DNV VA White Paper: PKI Interoperability by an Independent, Trusted Validation Authority. DNV Report 2005-0673 (2005)
(2005) PKI Interoperability by an Independent, Trusted Validation Authority 5th Annual PKI R&D Workshop NIST, Gaithersburg, Maryland, USA Jon Ølnes, DNV Research, Norway 04.04.2006 DNV – an independent foundation  Objective: To “Safeguard life, property, and the environment”  Established in 1864 in Norway (purpose: independent assessment of quality of ships to aid insurance tasks) 1 August 2017 Slide 2 DNV worldwide > 6000 employees, about 300 offices in about 100 countries 1 August 2017 Slide 3 DNV and digital value chains  DNV has existed as an independent, trusted party for 140 years - Ship and process industry classification and certification - Certification to ISO 9000, ISO 14000, BS 7799 etc.  Carry on this position to new areas - Digital value chains / processes between actors - Which trusted roles are needed for such processes? - Which roles may be of interest for DNV to take? - ”Safeguarding life, property, and the environment” applied on digital value chains - PKI and digital signatures are key elements in securing such processes 1 August 2017 Slide 4 DNV’s own PKI requirements (example)  Reshaping own business processes (e-processes)  Strong need for signatures, e.g. issuing ship certificates  Role as PKI Relying Party, e.g. receiving documentation from actors Global PKI interoperability is built in Finland to DNV Class required for these e-processes equipment from Germany Signed documents must be steel from South Korea verified by other parties than USA based ship owner those involved in the signing Bahamas registered process Insured in UK, calls port in Singapore, …. 
1 August 2017 Slide 5 Example: e-Procurement in EU public sector  “Directives oblige any public purchaser in the EU to effectively recognize, receive and process tenders submitted, if required, with a qualified signature and their accompanying certificates, regardless of their origin within the EU or their technical characteristics”  “The existing significant differences between qualified signatures …. should therefore be reason for great concern. The interoperability problems detected despite the existence of standards …. pose a real and possibly persistent obstacle to cross-border e-procurement.” 1 August 2017 Slide 6 Need for PKI interoperability = eService providers 1 August 2017 Slide 7 The challenges to the Relying Party 1. Is the certificate valid? - Check the CA’s signature - Verify content - Verify timestamps - Verify that the certificate is not revoked 2. Is the quality of the certificate sufficient for the purpose at hand? - Legal status (qualified etc.)? - Quality as described by certificate policy and other documents? - Compliance with claimed quality level? 3. Shall I trust the CA? - High quality, but it is located in Iraq … 4. What happens if anything goes wrong? - What liability does the CA take on? - What recourse do I have to claim this liability?  An RP needs different trusted services than a Certificate Holder 1 August 2017 Slide 8 What about trust structures? 
- Start with your own CA to obtain a trusted copy of a remote CA's public key
- May indicate quality (policy mapping, hierarchy base policies)
- Revocation checking must still be done towards the remote CA
  - May be a software integration and efficiency problem
- Liability still resides with the remote CA
  - Check the CA's policy
- Path processing (especially discovery) can be very complex

Risk management requires agreements
- Relying on general statements in policies is too risky
  - Written in Russian, referring to Russian law …
- An RP cannot enter agreements with all CAs
  - Cannot by itself judge quality and liability
  - Unknown risk situation
- A CA cannot have agreements with all possible RPs
  - Europe: in principle unlimited liability for issuers of qualified certificates
  - Unknown risk situation for the CAs

DNV's approach – the VA
[Diagram: certificate holders and their CAs on one side, eService providers (RPs) on the other, with the VA service in between]

"One stop shopping" for relying parties
- The VA is an independent trust anchor; trust is not delegated from the CAs
  - Challenges the PKI axiom that only a CA may be a trust anchor
  - The VA handles each CA individually
  - Must be independent from any CA – treat all CAs on equal terms
  - Eliminates the need for certificate path discovery and validation
- One agreement for processing of certificates, irrespective of origin
  - One point of contact and billing
- Proper management of risk and liability
  - Removal of complexity
  - Classification and assurance of quality
  - Acceptance of liability (agreement RP/VA)
  - Transfer of liability (agreements VA/CAs)
- One software integration
  - Web Service interface proposed for the VA service
- Scalability
  - Acceptance of new customers, with certificates from "new" CAs

The business case – win * 4
1. The Relying Party
   - One-stop shopping and proper risk management
2. The Certificate Holder
   - Possibly better reuse of the certificate
3.
The Certificate Authority
   - Better reuse of certificates – more relying parties
   - Agreements with RPs through the VA – improved risk management
   - The VA is not visible and shall not jeopardise CAs' business models
   - CAs tend to react positively to the idea of a VA …
4. The Validation Authority
   - On-line services that customers are willing to pay for(?)
- There should be a competitive market for VA services
  - Open specifications, in the end preferably standardised

Authentication is not trust
- A certificate provides
  - Authentication – knowing "with certainty" the name of the counterpart
  - "Proof" of this authentication
  - Mechanisms for secure communication
- This is not sufficient to trust the counterpart
  - Knowing the name of the crook does not make him honest
- Naming is an issue
  - Does the name in the certificate make sense to the relying party?
  - Or can it be translated into a meaningful name?
  - A VA service can provide (or support) identity management services
- Trading between unknown parties requires other trust anchors
  - Notary services, brokers, marketplaces, trusted semantic web etc.

VA services
[Diagram: layered VA services – interoperability for e-business (business processes, reference data); notary services (archiving, signature maintenance, format maintenance, time stamping); interoperability services (signature verification, classification, auxiliary information, certificate validation)]

Classification (ongoing development work)
- Objective criteria for certificate classification must be derived
  - Base on existing work (FBCA, EU qualified level, ETSI, ABA, research etc.)
- A CA is classed based on policy and other documentation (CPS etc.)
  - May include other information on the CA and its owners (customer base, credit rating, income versus expenses etc.)
  - Classification for a VA may be less stringent than policy mapping for cross-certification
- Level of compliance must be assessed
  - Study of documents, self-assessment, surveillance, third-party audit report, certifications etc.
- Indicate quality as a numerical value or profile (structure)
- The VA matches customer (RP) requirements with CA quality
- Criteria may be turned into standards
  - And be used as a basis for third-party certification (DNV business area)

VA services architecture
[Diagram: towards RPs, a Web Service (SOAP) interface fronting the certificate validation engine, a signature verification service, certificate directory access (LDAP or WS) and an auxiliary information service with its auxiliary info db; towards CAs and info providers, an OCSP client, a CRL pre-fetch component feeding a certificate revocation status cache, an LDAP or other interface to the CA, and a CA info db]

Some implementation issues
- Interface/integration towards relying parties
  - Web Services / SOAP preferred
  - Based on the XKISS part of XKMS
  - Security and authentication by SSL, and/or XML-DSIG and XML-Encryption
- Interface towards CAs (and other information providers)
  - CRL pre-fetch to the VA preferred – polling, not only on schedules
  - An OCSP client towards the CA must be supported
  - LDAP or other means to fetch certificates when only a reference is given
- Information stored locally
  - Enables historical validation, according to a time-stamp parameter in the request or a time-stamp in an old, signed document
  - For audit purposes and to prove the reason for answers
- DNV's development partner is Ascertia Ltd. (UK and Pakistan) – http://www.ascertia.com

Prerequisites and challenges
- PKIs must be sufficiently "open"
  - Some PKIs require each relying party to install particular software
  - The CAs' business models must support a VA service
- Privacy
  - Do not track use of certificates across RPs!
  - Sufficient security of logs and other information
- VA services and relying party preferences
  - A VA service may be "one size fits all" (base validation policy issued by the VA)
  - Or configured to the needs of the individual VA customer
    - E.g. specify particular rules for CAs that shall/shall not be trusted
    - Customer-specific validation policies
- Availability of the VA (single point of failure)
  - Distributed architecture needed
  - Replication for performance and availability
  - Localisation "close" to customers may be required
- Legal challenges in some countries?

Conclusions
- VA services proposed as an approach to PKI interoperability
  - Reuse of certificates
  - Agreements-based model
  - No path processing
- VA as trust anchor for the RP
  - One contract partner and one integration
  - The VA answers for "any" CA
- Separate trust anchors for CH and RP may be a better trust model
- First version of DNV's VA service available for pilot customers summer 2006

Thank you for your attention!
Jon.Olnes@dnv.com, +47 47846094

Using PDFs to Exchange Signed, Encrypted Data
Ron DiNapoli, Cornell University, CIT/ATA
5th Annual PKI R&D Workshop

Who Am I?
♦ Worked with Kerberos/central authentication 1999-2004 at Cornell
♦ Have attended various PKI-related events since 2000 (CREN, NIST, Dartmouth)
♦ Began working for a small group at Cornell looking at advanced technologies in 2005
♦ Looking at PKI usability/feasibility with respect to the Cornell environment since April 2005

Agenda
♦ Apologize to those expecting answers
  - My goal is to raise a question
♦ What problem am I trying to address?
♦ Make some assumptions about the problem
♦ Ask some questions about the problem
♦ Test the premise that there are no stupid questions
♦ Q&?

PROBLEM: A Recurring Theme
♦ User experience with PKI is bad!
  - Why Johnny Can't Encrypt (1999)
  - Alma Whitten's talk on a custom mail client at the 2nd Annual PKI R&D Workshop (2003)
  - Dartmouth Summit: user experience a big reason for lack of deployment (2004)
  - PKI '05 User Experience BOF

What is the Solution?
♦ Could it be as simple as "Fluffy"?
  - Does PKI need a mascot? :-)
♦ Seriously...
♦ Early 90s: Kerberos had KClient
  - Common end-user interface
  - Made Kerberos easier to use on more platforms for more people
♦ Can we learn from the past?

Where is My Focus?
♦ Focus on "commodity" uses where we might expect a large number of "novice" users to need to understand PKI
  - Web authentication
  - Signed/encrypted email
  - OS-level login/access
  - Custom (in-house) applications

Analyzing the Problem
♦ Apologies to mathematicians... the "End User Experience Support Expression":
  (e + w + 2) * p
  - e: number of email clients (with PKI support)
  - w: number of web browsers (with PKI support)
  - p: number of operating systems (platforms)
  - "2" for one OS-level login experience and one experience for custom applications

How Do We Deal with this Problem?
♦ Start by sorting the uses into two "everyday experiences"
♦ Authentication
  - OS-level login, web authentication, custom applications
♦ Encryption/Verification
  - Signed/encrypted email, custom applications

A Possible Solution
♦ Authentication and encryption/verification uses are (clearly) different experiences
♦ Can we unify these experiences across applications on each supported platform?
  - One authentication experience per OS
  - One encryption/verification experience per OS

Benefits of Unification
♦ Remember the expression: (e + w + 2) * p
♦ With unification, this becomes: 2 * p
  - One authentication experience
  - One encryption/verification experience
  - Multiplied by the number of supported platforms

Unified Authentication?
♦ Can the authentication experience be unified on each platform?
  - Not perfect, but there are examples of consolidation of PKI-related operations at the OS level:
    - Windows: CAPI
    - Mac OS X: Keychain/Certificate Services
    - UNIX/Linux: M.U.S.C.L.E.?
♦ But since this is a digital signatures panel we'll focus on...

Unified Encryption/Verification?
♦ More problems here...
♦ Different experiences across applications on the same platform
  - Eudora/Outlook/Mail.app/Thunderbird do it differently
  - Safari/Firefox/IE
  - Custom applications

Examples of Some Client Differences
[Screenshots: Apple Mail's verification of a sender's signature; Adobe's visual indicator for a document whose signature has been verified; Thunderbird's (Windows) interface for encrypting a mail message; Apple Mail's interface for encrypting a mail message; Thunderbird's (Mac OS X) interface for encrypting a mail message; Outlook Express's interface for encrypting a mail message; Thunderbird's UI element indicating that the sender's signature has been verified. Can you match the picture to the explanation?]

Can PDFs Help with Unification?
♦ Let's look at it in the context of encryption/verification...
♦ PDFs can be signed/encrypted/verified
♦ The infrastructure is already deployed to the majority of end-user systems
♦ UI elements are reasonably the same on all platforms
♦ End users are likely already familiar with PDF/reader technology
♦ Can PDFs be used for all of our encryption/verification needs? If they could...

Can PDFs Help with Unification?
♦ Since PDF technology is reasonably the same across platforms, our "unified" expression 2 * p actually becomes: p + 1
  - Where "p" is the number of OS-specific authentication experiences we need to educate users on, and the "1" represents educating users on PDF technology
  - Much better than (e + w + 2) * p

But Would it Work?
♦ Back to the million-dollar question...
♦ Can PDF technology replace existing encryption/verification technology in commercial and custom applications?
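The three expressions above are easy to compare numerically. A small sketch, using hypothetical counts for e, w and p (not figures from the talk):

```python
# Hypothetical environment: 4 email clients, 3 browsers, 3 platforms.
e, w, p = 4, 3, 3

baseline = (e + w + 2) * p  # distinct experiences to teach today
unified = 2 * p             # one auth + one enc/verify experience per OS
pdf_based = p + 1           # per-OS auth + one shared PDF experience

print(baseline, unified, pdf_based)  # 27 6 4
```

Even with these modest counts, the support burden drops from 27 distinct user experiences to 4.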
Two Types of Data
♦ Visual or static
  - Equivalent to the concept of sending "paper" to each other
♦ "Live" or dynamic
  - Equivalent to the notion of sending "files" to each other
  - The recipient may wish to modify it and send it to someone else

Signing/Encrypting Visual/Static Data
♦ This works today
♦ Use any PKCS#11 token
♦ Use a certificate in a software store
♦ You can encrypt based on a user-defined password or the Adobe Policy Server
  - Policy Server gives you more control over who can see the data and what they can do with it

Signing/Encrypting Live/Dynamic Data
♦ Some support in Acrobat/Reader
  - Form data in PDFs
♦ Less elegant solutions
  - Attach files directly to the PDF container
  - Adobe's PDF-to-text conversion (web site)
  - Search the Internet: "Convert From PDF"
  - In each case: you lose the signing history!

Conceptually, Where Does This Work?
♦ This concept works for applications such as:
  - Web browser file-level uploads/downloads
  - Mail clients
    - Just need to be able to handle attachments
    - Great given the lack of a unified user experience for S/MIME
  - Any other application that assumes the data to transfer is in a dedicated file

Conceptually, Where Doesn't This Work?
♦ Applications which do not use files to transfer data
♦ Can PDF technology be built into custom applications such that separate files are not needed?
  - Not really
    - Adobe has an "SDK", but it assumes a Java/Servlet/HTTP app
    - No way to access a hardware token on the local machine
    - Still file based

Demonstration: Signing a PDF (Hardware Token)

So, Does it Work?
♦ Based on the issues with dynamic data, it appears to fall short
♦ Is there hope for tomorrow?
  - The technology is already deployed
  - Adobe appears to be open to suggestions!
  - Minimally: is this concept a good blueprint for the "real" solution?

Demonstration: Encrypting a PDF (Policy Server)

Q&? Any Questions?
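The static/dynamic distinction above can be illustrated with a toy sketch. The "signature" here is just a SHA-256 digest standing in for a real PKI signature; the point is only that a signature covers exact bytes, so any conversion or edit of "live" data invalidates it and the signing history is lost.

```python
import hashlib

def sign(data: bytes) -> str:
    # Stand-in for a real signature over the document bytes.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, sig: str) -> bool:
    return hashlib.sha256(data).hexdigest() == sig

static_doc = b"%PDF-1.4 ... invoice total: $100 ..."  # illustrative bytes
sig = sign(static_doc)
print(verify(static_doc, sig))  # True: static "paper" round-trips intact

edited = static_doc.replace(b"$100", b"$120")  # recipient edits "live" data
print(verify(edited, sig))      # False: the original signature no longer applies
```

This is exactly why the talk's "less elegant solutions" (conversion, extraction) all lose the signing history: the bytes the signature covered no longer exist in the derived document.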
Signing form data on the Web
Presented at NIST's PKI Workshop, 5th of April, 2006
Anders Rundgren, Principal Engineer, RSA Security
arundgren@rsasecurity.com, +46 709 41 48 02
Disclaimer: This paper only represents the author's own opinions and should not be taken as a statement by RSA Security
V1.0, Anders Rundgren, RSA Security

Signing form data on the web – why and how?
Legal requirements for digital signatures in many e-government and e-health applications, together with the web's proven status as the medium of choice for mass-market IT solutions, mean that "WebSigning" is already used by millions of consumers for on-line banking and e-government services in the EU. SAFE, a recent BioPharma authentication initiative, is also targeting WebSigning as a primary delivery mechanism.

Message to Government
How do you send signed and encrypted* messages to a government agency?
*) It is rather confidentiality that is wanted. This can be achieved through message encryption, but also through transport (channel) encryption.

Message to Government Using "WebSigning"
By using https (for achieving confidentiality) and web signatures, it becomes comparatively easy to create secure, form-based applications. The minute application shown above is a basic version of a typical citizen-to-government (C2G) "data input" application on the web. The right-most display shows a web-signature dialog box, where the consolidated and typically "frozen" message data can be reviewed before signing and submission. "WebSigning" (when standardized and built in) offers full user mobility, since it does not require any additional locally installed software, assuming that smart card drivers and similar are in place.

Message to Government Using e-mail and S/MIME
N/A (more or less...)
• The user has to understand and activate the security (policy)
• Few S/MIME PKIs support the concept of a "department" or an "organization" only*
• The lack of an easy way to retrieve encryption keys has made PGP the most widely used e-mail encryption scheme
*) Due to this discrepancy between typical PKIs and the actual organization structure, the sender must know in advance who is actually going to process a message. This may not always be the case, and it is not entirely logical either, since there is typically more than one person in a department who processes incoming messages and tasks. In addition, if the designated individual is on vacation or similar, the message will be left unprocessed.

Structured Messaging
How do you create, secure and send structured* messages?
*) Structured messages in this presentation denote messages that are intended for consumption by computer systems rather than by humans.

Structured Messaging Using e-mail and S/MIME
Creating and validating complex XML messages in a stand-alone mode is hard for users, in addition to being highly error-prone.

Structured Messaging Using "WebSigning"
[Screenshot annotations: "Guidance"; "Missing" (added by backend); internal only; internal + external]
The screen dump above shows the final display of a session where a purchaser has put goods into a virtual "shopping cart" utilizing standard web techniques. Using the web makes it possible not only to specify simple products, but to conveniently configure arbitrarily complex items such as computers and airline tickets. Note that the purchaser simultaneously signs some information that is only intended for internal use (cost center), as well as information intended for both internal and external consumption (order data). That the buying organization, date and order number seem to be missing is because these items are preferably added by backend processes.
Order numbers are typically not created until orders are ready for transmission to suppliers. Order requests like the above may need further authorization by managers, who can also dismiss requests. Note that user signatures stay within the information system boundaries (as proofs of action), since outgoing purchase orders are, when fully authorized, created and secured by the purchasing system, not by end-users. This architectural principle is a de-facto standard for many types of business and information systems, including the payment networks used by the financial industry, rather than being limited to purchasing systems.

Structured Messaging Using "WebSigning", continued
[Diagram: an information system, typically based on web servers, SQL databases and "business logic", performs validation and archiving and produces outgoing messages* in a community-specific message format (e.g. XML, EDI, ASCII) with its own transport and security solution. WebSigning adds a user-oriented data and presentation format (e.g. HTML) and a signature returned by the WebSigner (e.g. XML DSig).]
1. When the user indicates he or she is ready, the information system generates a signature request in a format that invokes the WebSigner.
2. When the user has completed the signature process, the WebSigner returns a matching signature to the requesting information system.
3. After signature validation and archival, the information system may create an outgoing message based on the data associated with the signature. This data is typically kept in an internal format during the web session. Before transmitting an external message, it is secured using a community-specific method.
*) Although highly interesting, how possible outgoing messages are secured is generally out of scope for web-signing schemes. Note though that there are use-cases, particularly in the government-to-government (G2G) space, where user signatures may indeed need to be exchanged between different parties.
Such uses include citizen permit applications which involve more than one government agency. In this case, a citizen signature and associated document data would typically only be a "payload" of an embedding message holding agency-related data associated with the permit application.

Structured Messaging: The Alternative – "Fat" Clients
The upside:
+ Highest possible functionality and performance
+ For frequently used applications, more or less a necessity
The somewhat darker side of fat clients:
- Often 3-10 times more expensive to develop, deploy and support than web solutions
- Hundreds of unique clients needed in a large enterprise
- Inflexible and static
- Usually highly platform dependent
=> Not applicable in a C2G environment

Signature Validation
How do you validate and represent a signed message for a user?

Signature Validation Using e-mail and S/MIME
A prerequisite for performing signature validation is that trust anchors are available. The S/MIME way of communicating implicitly creates a huge number of CAs, which makes trust anchor management less straightforward except within a "community" like that provided by the US Federal PKI. Old signatures with expired certificates also create difficulties for users. Another hurdle is that the financial sector has, in some markets, begun to issue certificates requiring the verifier to have a contract and licensed validation software, which is incompatible with end-user-based e-mail. Currently, few ordinary users understand how to deal with PKI and trust anchor management.

Signature Validation Using "WebSigning"
Using WebSigning, a service provider performs validation once, preferably immediately after receipt of the signed message. How much signature information a service provider makes available to end-users varies, but it is typically limited to a mark of some kind.
The information-system-centric approach to signature validation enables a service provider to unilaterally set policy, rather than pushing policy and trust decisions down onto its users. This scheme also offers the highest possible mobility, since a user only has to carry around his/her own certificates.

Problem: Current WebSigning solutions are proprietary, non-interoperable, and all over the map
Basic technology choices include:
• ActiveX plugins for MSIE
• Platform-independent Java applets
• Platform-dependent Java applets
• Local signing web proxies
Summary: There are numerous reasons for a standardization effort...

The WASP (Web Activated Signature Protocol) standards proposal
• Operating system independence. WASP only relies on standard web technologies such as XML, MIME and X.509
• Device independence. WASP is designed to run on everything from smartphones to workstations
• Document format independence. Signs any browser-viewable media like TXT, HTML, JPEG, MS Word, Adobe PDF, etc., as well as attachments in arbitrary formats
• Unified signature procedure. WASP unifies on-line signature procedures in the same way as is already the case for signed e-mail
• Multiple signature formats. WASP supports XML DSig and ETSI's XAdES (specifiable by the signature requester)
• What you see is what you sign (WYSIWYS). In harmony with legal and user requirements
• Thin client design. A browser distribution would be about 200K bigger in order to support WASP

Digital Signature Usability
Ravi Sandhu, George Mason University and TriCipher
© 2006 Ravi Sandhu, www.list.gmu.edu

Objectives
• Emphasize usability, not cryptography
• But they are interrelated
• All the same, there are some purely usability issues on which we currently do a terrible job

Think outside the box
• Cryptography alone cannot provide assurance of signatures.
• It is necessary but not even close to being sufficient
• Also need elements of "trusted computing" – founded on a strong hardware base for high assurance
• The needs of transaction signatures are very different from those of document or email signatures
• Transaction signatures, rather than signed email, may be the killer application
• The biggest productivity gains are in volume of low-grade transactions, not so much in automating really high-end transactions
• There is no such thing as an offline transaction
• Transactions are typically verified by computers, not by people

Questions (signer oriented)
• Can users execute the signature procedure when appropriate?
• Do they understand when it's appropriate?
• Do they realize the consequences of their actions?
• Can they recover if they accidentally make a mistake?
• What clues are provided to guide them?
• Do all signatures need to be of the same strength?
• Who determines what the strength of a signature should be?

Questions (verifier oriented)
• Is the verifier a human or a computer?
  - Signed email: human verifier
  - Signed transaction: computer verifier, with possibly human audit and recourse forensics
• How do we deal with the revocation problem?
  - Should the verifier even be responsible for this problem?
• Do I have responsibility for ensuring that the signer signed what I intended for the signer to sign?
• Is there a notion of a verification chain?
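Sandhu's observation that transactions are typically verified by computers, and Rundgren's information-system-centric validation earlier in the session, suggest the same server-side pattern: validate once on receipt, archive the evidence, and show end users only a simple mark. A minimal sketch of that flow, with a hash comparison standing in for real XML DSig validation (all names here are illustrative, not from either talk):

```python
import hashlib, json, time

ARCHIVE = []  # validated evidence kept for audit; never shown to end users

def canonical(form_data: dict) -> bytes:
    # Deterministic serialisation so signing and verifying see the same bytes.
    return json.dumps(form_data, sort_keys=True).encode()

def websigner_sign(form_data: dict) -> str:
    # Stand-in for the signing component returning a real signature.
    return hashlib.sha256(canonical(form_data)).hexdigest()

def validate_and_archive(form_data: dict, signature: str) -> bool:
    ok = hashlib.sha256(canonical(form_data)).hexdigest() == signature
    ARCHIVE.append({"at": time.time(), "ok": ok, "data": form_data})
    return ok  # the end user only ever sees this mark

order = {"cost_center": "4711", "items": [{"sku": "PC-01", "qty": 2}]}
sig = websigner_sign(order)
print(validate_and_archive(order, sig))      # True: mark shown to user
print(validate_and_archive(order, "bogus"))  # False: rejected, but still archived
print(len(ARCHIVE))                          # 2 archived validation records
```

The design choice both speakers point at is that policy and trust decisions live in the information system, not in each user's client.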
PEI Models Framework
[Diagram: three layers of models linking security and system goals (requirements/objectives) to an implementation – informal or quasi-formal policy models; enforcement models expressed as system block diagrams and protocol flows; implementation models expressed as pseudo-code. The horizontal view looks at an individual layer; the vertical view looks across layers, down to the target platform (e.g., Trusted Computing technology and PKI) and actual code.]

Achieving Email Security Usability
Phillip Hallam-Baker, Principal Scientist, VeriSign Inc.

Abstract
Despite the widespread perception that email security is of critical importance, cryptographic email security is very seldom used. Numerous solutions to the problem of securing email have been developed and standardized, but these have proved difficult to deploy and use. One of the main reasons for this difficulty is that each piece of the required technology has been developed independently as a generic platform on which security solutions may be built. As a consequence the user is left with an unacceptably complex configuration problem. This paper proposes a means of providing transparent email security without the need for additional configuration, based on existing security standards (XKMS, S/MIME, PGP, PKIX) and the recent DKIM standards proposal. Although the client deployment mode is considered, the same approach would be equally applicable to an edge security configuration. Possible extensions of the protocol allow support for document-level security approaches and resistance to attack by quantum cryptanalysis.

The Usability Problem
It is a truth universally acknowledged that an Internet user in possession of an email application must be in want of encryption. Despite the strong and nearly universal belief in cryptographic security within the information security field, users have proven exceptionally reluctant to use the encryption features built into practically every major email program for close to a decade.

It is time for the security community to recognize that the users do not reject cryptographic solutions out of ignorance. They reject them because they are too difficult to use and often fail to meet their real security needs. The cost of public key infrastructure that impedes deployment is mental rather than financial. Users do want security. But they are not prepared to do their work any differently or learn any new tools to achieve this. Users demand security that is completely seamless and transparent, built into the fabric of the Internet infrastructure.

The need for ubiquitous Internet security has never been more apparent or more acute. Internet crime is now a professional business conducted for profit. The twin engines of Internet crime are spam and networks of compromised computers (botnets). The lack of a ubiquitous email authentication infrastructure allows phishing gangs to steal credit card numbers and access credentials by impersonating trusted brands. The demand for usable security is critical even in classified applications that have traditionally relied on sophisticated operating systems designed to be secure at all costs1.

What is usability?
A secure application should require no more training and be no more difficult to use than an insecure one. In order to realize these goals it is necessary to:
• Employ consistent and familiar communication methods
• Eliminate all non-essential interaction
• Communicate all essential security information
While these goals may not prove to be sufficient, it is clear that they are necessary and that current email security implementations do not achieve them.

How current systems fail
Instead of being presented with a solution that provides security automatically and reliably, the user is given a 'self-assembly kit'. Once the user has selected a Certificate Authority and enrolled for a digital certificate, S/MIME allows her to sign individual email messages or set a policy of signing all outbound email. If there is a digital certificate available for the recipient she may choose to send the message encrypted, or not.

For the average user this already represents a bewildering array of decisions, but the user is still far from having a fully functional email security solution. She has not yet configured her LDAP directory or her SCVP interface. She has not loaded her smartcard drivers. And after completing all these tasks she will have to renew her certificate a year later when the original expires.

In the mid 1990s a considerable effort went into ensuring that every major email client supported the S/MIME protocol. But even though this top-down 'deployment' was almost completely successful in making secure email available to over a billion users, it was entirely unsuccessful in persuading them to use it.

The bottom-up deployment strategy of PGP was only marginally more successful. PGP persuaded a significant minority within the technical community to install and configure a security plug-in. But even amongst this community security is the exception, not the rule. Only a tiny number of PGP key holders use it every day. Neither protocol has succeeded in achieving ubiquitous use today, nor is there reason to believe that this will change in the future.

PGP suffers from similar usability problems, notably described by Whitten and Tygar2 in 1999. Like most S/MIME interfaces, the PGP 5.0 interface described in the paper is designed with the goal of allowing the user to use cryptography as if this was the end rather than merely the means. Later versions of PGP, notably PGP Universal, have attempted to overcome the usability deficit. However this has been achieved by having "declared peace in the certificate and message format debates"3 and essentially implementing every variant of every standard. As such PGP Universal is agnostic on the critical question as to which software architecture is most likely to enable a ubiquitous Internet-wide email security infrastructure.

Traditional PGP offers the non-technical user an even more puzzling requirement. Before they can use their key they should get it signed by one or preferably several other PGP users that they already know. Enterprise-strength PKI systems allow network administrators to substantially mitigate this pain for the enterprise user. The personal Internet user is left on their own. Their perception of their security needs, and thus their tolerance for deployment pain, is very substantially lower, yet as the problem of phishing demonstrates, personal Internet users have more than sufficient assets to be the target of professional Internet criminals. Personal users may have less confidential information to be stolen, but they have money that can be stolen and they are much more likely to be tricked into parting with it.

Metcalf's law and its corollary
Metcalf's law states that the value of a network is proportional to the number of people it reaches. Metcalf's law is often quoted in the context of breathless pitches for 'viral marketing' programs premised on the fact that once a network has gained 'critical mass' its growth becomes self-sustaining.

The unfortunate corollary to Metcalf's law is the chicken-and-egg problem. The same process of positive feedback can cause a network that has not reached critical mass to quickly lose members. The Internet now has over a billion users, and 'critical mass' for an application is likely to be several tens of millions of active users.

The problem of network effects is even more acute when a new network is in competition with an established one. If an S/MIME signature is added to an email there is a small but significant risk that the receiver will not be able to read it. Some email programs cannot process messages in S/MIME format. Other programs can process the message but display it to the user in a distinctly unhelpful fashion. An early version of the Internet access software provided by one major ISP displays a helpful message 'warning' the user that a signed email has been received.

The deployment problem
"Philosophers have only interpreted the world in various ways, the point is to change it" – Karl Marx
The usual solution to this corollary is to identify a community of early adopters with an urgent need for an email security solution that meets a particular need within that community. The early adopter generally targeted for this approach is government, in particular the United States Government.

Early adopter community
The problem with this approach is that the needs of early adopter communities tend to be specialized. A solution that meets these needs may not meet the needs of Internet users as a whole. Early adopter communities are also likely to be tolerant of usability problems that are show-stoppers for Internet users as a whole.

The problem of specialist needs is particularly acute in the US government. In addition to being considerably larger and more complex than the largest corporation, the US government has considerably more information to protect and a greater need to keep it secure. The military alone has over 1.4 million active duty personnel, 1.2 million reservists, a further 654,000 civilian employees, and indirectly employs a similar number of contractors5. In addition approximately two million retirees and family members receive benefits. In comparison Wal-Mart, the world's largest corporate employer, has 1.6 million employees6.

Early adopter communities can also be unrepresentative of even their own needs. The US government certainly has a need for a security infrastructure that allows confidential and classified information to be protected. But it is not clear that these needs are met by an email security protocol. A classified document should be encrypted whether it is stored on disk or traveling over the Internet. This requirement is more appropriately met by document-level security systems being developed in the context of Trustworthy Computing and Digital Rights Management. It appears that S/MIME has failed to meet government needs by offering too little, even as it has failed to achieve widespread deployment by requiring too much.

The installed base
As we have seen, the success of any new security infrastructure depends in large measure on how it interacts with the existing infrastructure. In particular the development cycles for client applications are typically three years or more4, and at any given time at least half of the installed base of applications is three years old or more.

It is clearly desirable for a security proposal to be as compatible with the installed base as possible. But it is unrealistic to expect that legacy systems will be as secure as those that are updated. It is important that a secure email protocol be compatible with the legacy infrastructure, but it is also important that expectations be realistic. It is essential for legacy users to be able to communicate and interact with secured systems. It is neither essential nor realistic to expect a new security protocol to offer infallible protection for the user who does not have an up-to-date application or whose machine has been compromised by a Trojan.

Essential criteria
• Provide acceptable security and usability when used with an aware client
• Provide acceptable usability when used with a non-aware client
Non-essential criteria
• Provide protection against bug exploits in legacy applications or platforms
• Provide protection when the user's machine has been compromised by a Trojan

Pain Point
Deployment of new Internet infrastructure is expensive and time consuming. This expense is
In the early days of the only likely to be met by a security protocol if it Internet the US government and government meets a critical pain point that is urgently felt at funded research institutions represented a clear the time it is being deployed. majority of Internet users. Unlike the ‘early adopter’ strategy which attempted to identify a subset of users for whom the proposal represents a ‘killer application’ in the ‘pain point’ strategy we attempt to identify i For example consider the release cycle of particular functionality that addresses an issue of Microsoft Windows for home use, major updates immediate and urgent concern for the occurring in 1995, 1998 and 2001[4] community of Internet users as a whole. The pain that is being felt most urgently on the suspensions and prison terms rather than being Internet today is caused by Internet crime, in prevented from speeding using a speed limiter. particular spam and phishing7. Even if every motorist was required to install a speed limiter this would only prevent one type of Bootstrap strategy traffic violation; it would still be necessary to use the deterrence approach to control reckless Addressing an urgent pain point is a necessary driving, driving under the influence of alcohol. requirement for achieving a critical mass of support. If we are not careful however we may The glue that holds social networks together is end up with a proposal that meets the accountability rather than control. Control based requirements for addressing the pain point and security systems are not applicable to the only those requirements. Instead of establishing principle security issues facing the Internet a ubiquitous and pervasive security infrastructure today: the problems of Internet crime, in for all email we will have only succeeded in particular spam and phishing. Nor should it be a meeting our current needs with no plan for surprise that the Internet security problems that extending the solution scope in the future. 
have not been solved today are the ones which the control approach is not suited for. The Future-proofing a solution is particularly problems for which it is suited have already been important in the context of Internet crime. solved. Professional Internet criminals seek the largest return for the least amount of effort. Phishing The accountability approach to information spam is not their first criminal tactic to exploit security is better suited to applications where the the lack of security in email and unless we have consequences of individual security failures are a comprehensive email security plan it is small but the aggregate consequences of many unlikely to be the last. small security failures are significant. Accountability not Accountability Approach • Authentication: Who should be held Control accountable? Since its beginnings the field information • Authorization: What the likelihood of security has been dominated by government compliance? needs and in particular academic perception of military needs. This has led to the development • Consequences for default of security systems designed to control access to information: As in the control approach the first two steps in the accountability triad are authentication and Control Approach authorization. The principle difference is that in the control approach authorization is the last step • Authentication: Who is making the in the process. The authorization decision is request? binary; access is either granted or withheld. • Authorization: Is the request permitted In the control approach there is a bias towards for this party? refusing access unless the criteria for granting it The control approach is based on the assumption are met. 
The Internet security problems that have that there is a clearly defined set of parties, a proved intractable using the control approach are clearly defined set of rules that are to be applied problems where the consequences of incorrectly and that both the rules and the parties to which granting access on a single occasion are small (a they are to be applied are known in advance. single spam is an annoyance) but the consequences of incorrectly granting access on a There is no set of rules that can be written in large number of occasions are severe (a thousand advance that will infallibly identify spam email spam messages a day is a crisis). without mistake yet it is easy to recognize spam when it is received. In the accountability approach there is a bias towards granting access, provided that we are Not only do these assumptions fail when applied confident that there will be significant to a public network, they also fail for a large consequences if the other party defaults. This is a number of real world situations. Motorists are much closer match to our typical ‘real world’ deterred from speeding through fines, license behavior than the principle of ‘do nothing until completely sure’ that characterizes the control domain name owner that take responsibility for approach. the email. The Internet has a billion users, attempting to hold each and every user The consequences of default may be loss of use, accountable for sending unwanted email is a civil actions or even criminal prosecution. What futile effort. Holding ISPs, Corporations, is important in the accountability approach is Schools and Universities accountable for that the perceived probability of the policing their own users is much more consequences being imposed and the promising. consequences themselves be sufficient to deter an unacceptable rate of default. 
In particular the DKIM architecture is designed to the assumption that messages are signed at the The Responsibility outbound email edge server of a network rather than by individual who sent it. On the receiving Problem side the design is optimized to meet the needs of Domain Keys Identified Mail (DKIM8) is an a signature verification filter at the incoming email authentication technology that allows an email edge server. In most cases this filter would email sender, forwarder or mailing list to claim be a part of a spam and virus filtering solution. responsibility for an email message. A party that The edge architecture of DKIM allows for rapid claims responsibility for an email message deployment as an organization can deploy DKIM informs the recipient that they can be held through an infrastructure upgrade limited to the accountable and thus may increase the email servers. probability that the intended recipient will accept it. DNS Key Distribution Although DKIM does not and cannot solve the DKIM is a highly focused proposal designed to spam problem directly, DKIM allows email solve the responsibility problem using minimal senders who volunteer to be held accountable to extensions to existing protocols and distinguish themselves from likely spammers. infrastructures. Instead of proposing deployment The spammers have a vast array of tactics but of a new Public Key Infrastructure for key each and every one is designed to avoid the distribution DKIM keys are distributed through spammer being held accountable. the DNS using unsigned public key values stored The DKIM message signature format allows a in a standard text record. signature to be added to an email message Using the DNS to provide the key distribution without requiring modification of the message mechanism allows any email sender to start body. 
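The "unsigned public key values stored in a standard text record" mentioned here follow the general DKIM convention of tag=value pairs in a TXT record. The sketch below shows how a verifier might pick apart such a record; the selector name, domain and key material in the example are invented for illustration.

```python
# Sketch: parsing a DKIM-style key record published in the DNS.
# The tag=value layout follows the DKIM convention; the selector,
# domain and (truncated) key material below are invented examples.

def parse_tag_value(record: str) -> dict:
    """Split a 'tag=value; tag=value' TXT record into a dict."""
    tags = {}
    for field in record.split(";"):
        field = field.strip()
        if not field:
            continue
        tag, _, value = field.partition("=")
        tags[tag.strip()] = value.strip()
    return tags

# A hypothetical record at mail2006._domainkey.example.com:
record = "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQ"

tags = parse_tag_value(record)
assert tags["v"] == "DKIM1"   # record version
assert tags["k"] == "rsa"     # key algorithm family
public_key = tags["p"]        # base64 public key, published unsigned
```

A real verifier would fetch the TXT record with a DNS query against the selector named in the message signature before parsing it as above.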
This ensures that (unlike S/MIME or PGP) accepting responsibility for outbound email by the addition of a signature to an email does not signing it without requiring the sender to deploy negatively impact any recipient. Another any new infrastructure beyond adding the email significant departure from previous schemes is signature module to their outbound mail server that recipients are advised to treat a message and adding a small number of text records to carrying a signature that cannot be verified as if their DNS. it were unsigned. The disadvantage to this approach is that the key The DKIM sender signature policy record allows distribution mechanism is limited by the a domain name owner to explicitly deny architecture of DNS which is designed to responsibility for unsigned mail message by provide a fast response to contemporaneous stating that all authentic mail is signed. This requests. The DNS has no concept of history and makes it possible for an email recipient to there is no way to ask ‘what did this DNS record conclude that an unsigned message is likely to be look like two months ago’. While this is not a a forgery, a conclusion that is not possible with significant constraint when an email message is any of the previous cryptographic email security being validated in-transit (e.g. at the inbound proposals. email edge server) the DNS is not an ideal infrastructure for serving the key distribution Edge Architecture needs of an email client which might want to Unlike the traditional approaches that attempted verify a signature on an email opened hours, to identify the individual responsible for sending days or even months after it was originally sent. the email, DKIM is designed to identify a piece of information issued by the bank, every The Authenticity Problem letter, every credit card, every ATM is Traditional email security approaches consider consistently branded with the current logo. 
To confidentiality and integrity to be complimentary solve the authentication problem the same cues tasks that are equally important. This assumption must be applied to Internet communications. introduces a subtle bias into the architecture as it is assumed that senders and receivers must both Secure Internet Letterhead upgrade their email clients to exchange secure Secure Internet Letterhead is a proposal for a mail. comprehensive Internet authentication This assumption certainly holds for encrypted infrastructure that allows every trustworthy mail where a recipient must have the means to Internet communication to be securely marked decrypt the message in order to read it. But the by a trusted brand. assumption that a recipient must have the means The SSL padlock interface is designed to tell the to check the signature on a signed mail before user ‘if the padlock icon is present the domain reading it is a major departure from existing name component in the address bar can be practice. It has led to a situation where S/MIME trusted’. The Secure Internet Letterhead signatures cannot be used against the problem of approach is direct: ‘if the trusted brand logo phishing because of the minority of email readers appears in the secure area of the browser it can that are unable to present a signed message to the be trusted’. user in an acceptable fashion. For a user interface component to be trustworthy The problem of phishing highlights the need to it must always be trustworthy. DNS Domain consider authenticity separately from the Names and X.500 distinguished names were problem of integrity. It is much more important both designed to provide a directory function. that a recipient be able to identify the sender of Attempting to overload this function and in an email than know with certainty that the addition use them as a security indicata is content has not been modified in any respect doomed. Secure Internet Letterhead introduces a since. 
new indicata whose sole purpose is to provide a Traditional email security approaches have security indicata. attempted to identify the sender of an email by If the authentication mechanism is to be means of an X.500 distinguished name or an successful it must be applied consistently and RFC 822 email address. The second approach ubiquitously. In addition to its application to has proved more successful than the first but still email described in this paper work is underway allows email senders to be impersonated through to apply the same principles and underlying use of ‘cousin’ or ‘look-alike’ domains. DKIM technology to Web transactions (using SSL) and allows ‘AnyBank’ to prevent an attacker to Internet Messaging, telephony and Video. successfully impersonating anybank.com. DKIM does not prevent the attacker registering a similar Secure Internet Letterhead is a realization of the domain name such as any-bank.com or anybank- PKIX LogoType extension proposed by Stefan security.com. The introduction of Santesson et. al., expected to be accredited as an internationalized domain names9 provides IETF draft standard in the near future.10 The additional scope for this type of attack. PKIX LogoType extension allows a certificate issuer to embed links to one or more logos A phishing impersonation attack is directed at representing the brands of the certificate subject the weakest link in the security chain, the gap and/or issuer. between the computer screen and the user’s head. To close that gap the authenticity of the Linking a certificate record to a DKIM public message must be demonstrated using cues that key record11 allows the DKIM signature format are familiar to the user. A user cannot and should to be used as a vehicle for applying secure not be expected to recognize AnyBank by its letterhead. The brand of the message sender is Domain name any more than by its telephone only shown if the message signature verifies and number or ABA routing number. 
Customers the signature key is authenticated by an X.509v3 recognize businesses in the physical world by certificate carrying the corresponding LogoType their brands. Every large bank has a team of extension that is issued by a trusted certificate people whose sole job is ensuring that every issuer (Figure 1). Figure 1: DKIM Secure Letterhead The prototype implementation of Secure Internet Various control based mechanisms have been Letterhead was developed as a Web Mail proposed to ensure that Certificate Authorities interface. This approach was chosen to further carry out their duties accurately and effectively. the deployment strategy. If one or more of the Like all control based security approaches these principal providers of Web Mail services were to suffer from the weakness that they can only deploy Secure Internet Letterhead critical mass define minimum standards for compliance. would be achieved instantly. Even adoption by a Control based security does nothing to encourage single Web Mail provider would provide a the development of improved authentication compelling business case for Financial criteria above and beyond the minimum. Institutions targeted by phishing to obtain a The most appropriate way to ensure the Secure Letterhead certificate. trustworthiness of Certificate Authorities in an Qui Custodiet Custodes? accountability based security scheme is to apply accountability principles to the problem. The security of Secure Internet Letterhead is Displaying the issuer logo to the user, either critically dependent on the trustworthiness of the directly in the email message dialog or through a certificate issuers. If an attacker can persuade a ‘pop-up’ or ‘mouse-over’ window forces the Certificate Authority to issue them a certificate Certificate Authority to put its own brand on the with a logo that impersonates a trusted brand the line every time a certificate is issued (Figure 2). 
introduction of letterhead makes the phishing problem considerably worse. Figure 2 DKIM Secure Letterhead Issuer Logo While effective authentication processes and can expect a DNS record to still be available rigorous quality control can minimize the risk of minutes or hours after the message was sent. issuing a fraudulent certificate no amount of Demanding records to be available at an prior investigation can ensure that the Certificate indefinite time in the future represents a subject will not default at a future date. Even the significant change to the operational best known and trusted brand can be acquired by requirements of DNS. a company that is later discovered to be run by For signature validation in the client application crooks and swindlers. For secure Letterhead to to be viable, persistent credentials are required. be trustworthy as well as merely trusted it is DNS is not designed to provide a persistent essential for the Certificate Authority to support credential repository but other existing PKI rapid revocation of keys that are used protocols are. In particular XKMS13 was fraudulently. For example by supporting a real designed to provide a persistent store for PKI time certificate status protocol such as OCSP12. credentials that is entirely agnostic with respect to the architecture of the underlying PKI. Like Client Application the DKIM DNS based key distribution model, Validation XKMS realizes a key centric PKI model similar to the original Public Key Directory model The DKIM protocol combined with Secure proposed by Diffie and Hellman14. XKMS may Letterhead provides a robust solution to the also be used as a gateway to a traditional authentication problem for users of hosted Web certificate based PKI following the Kohnfelder Mail services. As previously discussed however, model15. 
DNS begins to show weaknesses as a key distribution infrastructure when signature The DKIM signature format allows additional verification is performed offline in the email key distribution mechanisms to be specified by client rather than during the transaction flow by means of an attribute. In a typical application the messaging infrastructure. A signature verifier both key distribution mechanisms would be supported. This allows in-transaction signature make an effort to configure it. XKMS supports verification filters to acquire keys quickly while automatic discovery of the local XKRSS ensuring that the needs of offline clients for a registration service using the DNS service persistent and dependable key distribution discovery (SRV) record16. infrastructure are both met. If the user’s email address is alice@example.com an XKMS aware client can Per User Signatures discover the DNS address of the local XKRSS Support for signature verification in the email service by requesting the SRV record client extends the scope of the DKIM signature _XKMS_XKRSS_SOAP_HTTP._tcp.example.c to the receiving end of the communication. It is om. Once the XKRSS service is located the logical to look for ways in which the scope of the email client can register keys for any purpose security context can be extended to the sending they are required for: signature, encryption or end of the communication, allowing the key exchange. individual email sender to sign their The development of a prototype implementation correspondence with their own individual key. revealed a minor shortcoming in this aspect of Even though support for ‘per-user’ keying is the XKMS design. The only way that the XKMS outside the scope of the initial DKIM charter the client can discover the features supported by the base specification provides all the mechanism XKMS service is to attempt each one in turn. 
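The SRV-based discovery step described here amounts to deriving a DNS owner name from the domain part of the user's email address. A minimal sketch, using the prefix given in the text:

```python
# Sketch: deriving the SRV query name for the local XKRSS registration
# service from a user's email address, per the discovery scheme above.

def xkrss_srv_name(email: str) -> str:
    """Build the DNS SRV owner name for the XKRSS service."""
    domain = email.rsplit("@", 1)[1]
    return "_XKMS_XKRSS_SOAP_HTTP._tcp." + domain

assert xkrss_srv_name("alice@example.com") == \
    "_XKMS_XKRSS_SOAP_HTTP._tcp.example.com"
```

A real client would then resolve this SRV record (with whatever DNS library the platform provides) to obtain the host and port of the XKRSS endpoint; the sketch only shows the name construction.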
A necessary to sign messages with individual user richer service description language would allow keys and to use them for message validation. the XKMS service to tell the client which services are available. What the base DKIM specification lacks is support for management of the private key lifecycle. This is not a major concern for Encryption deployment at the edge. Even a large enterprise DKIM, X.509 certificates and XKMS provide all is unlikely to need more than a ten or a hundred the support necessary to support a domain keys. With ‘per user’ keying even a comprehensive yet completely user friendly moderately sized enterprise may quickly find email authentication mechanism. Adding support that it is managing hundreds, thousands or even for encryption completes the requirements for hundreds of thousands of keys. Domain names secure email as they are traditionally understood. tend to be relatively stable but students, Instead of proposing yet another email message employees and customers come and go. Unless encryption format however we observe that the the secure email client application provides existing S/MIME17 and PGP18 message formats support for key lifecycle management per user- provide almost everything that is needed. While keying quickly becomes unmanageable. either message format would meet the technical Key Lifecycle management requirements support for both formats is required to meet the political constraints created by the with XKRSS S/MIME vs. PGP standards war. To date this Fortunately XKMS also provides for key struggle has reached a stalemate, S/MIME lifecycle management. The XML Key dominates deployment but PGP dominates in Registration Service Specification (XKRSS) mindshare. The quickest way to resolve this component of XKMS is designed to support stalemate is to declare both formats winners and registration, reissue, revocation and recovery of move on. private keys. 
Problems An XKRSS client may be written from scratch in a few days if an XML parsing library is available Although the S/MIME and PGP message formats and open source toolkits are available for many are entirely sufficient both protocols have languages. significant usability defects that must be addressed if our deployment criteria are to be The Configuration Problem met. As the experience of S/MIME deployment demonstrates, support for a security feature is unlikely to be used if the end user is required to Key Distribution Security is End to End Only The principle defect in the most commonly used Although some effort has been made to introduce implementations of the traditional email an edge-to-edge model to both PGP and encryption formats is that both lack an effective S/MIME both specifications are essentially mechanism for key distribution. Given an email predicated on an end-to-end security model. address alice@example.com there is no simple This causes particular difficulty where process for locating the encryption key to use to encryption is concerned since many enterprises send email to that address. do not want to accept encrypted email messages XKMS, and two recent PKIX extensions, unless they are certain that they do not contain a PKIXREP19 and the proposed CERTStore20 virus or other form of executable code. Nor is extension solve this problem by allowing the end-to-end encryption likely to be acceptable to email sender to discover the location of the key end users if it renders spam filtering measures distribution service for the recipient using the inoperative. same SRV mechanism used to discover an Another source of difficulty with end to end XKMS registration service. encryption is the current trend towards receiving Once the key distribution mechanism is made email on a wide variety of portable and mobile automatic an email client can be configured to devices. 
It is not unlikely for a user to require automatically encrypt outgoing messages access to their email by means of a desktop, whenever an encryption key is available for the laptop and PDA. The end to end principle is also recipient. Email encryption becomes entirely inappropriate in the context of a Web mail seamless and automatic. service. The XKMS architecture allows the domain name Encryption is Message Body owner to control key distribution infrastructure Only for and hence the use of encryption in their domain. If the domain name owner wants to In S/MIME and PGP the SMTP encryption is ensure that encrypted email can be read by virus applied to the message body alone, the subject scanning or compliance systems at the incoming line is left unencrypted despite the fact that the edge server this can be achieved by returning the subject line is very likely to contain confidential public key of the edge server in response to key content. As a result the legitimate expectations of location requests. the user are not met. While this violates a core premise of the Solving this particular problem requires only the traditional email security protocols, that the end recognition that it is more important to meet the user should be empowered to control their own security expectations of the user. The solution security, domain names are inexpensive. The adopted in the prototype is to introduce a user who feels the need for ‘empowerment’ and confidentiality option into the email composition has the ability and inclination to control their window. If the confidentiality option is selected own security can readily do so by obtaining their the email client ensures that the entire message is own domain name. 
encrypted by moving the subject line into the message body and adding a new subject line After decryption at the email edge server the ‘Confidential’ or if applicable ‘Client message may be re-encrypted under the end- confidential – Attorney work product privilege user’s key. The resulting ‘encryption with a gap’ asserted’. need not mean a weaker security solution than the traditional end to end approach. For most If the confidentiality option is selected and it is enterprises the risk of trojan code bypassing their not possible to send the message encrypted the firewall and anti-virus filters is considerably user is warned. The user is given the option of greater than the risk of unintended disclosure of canceling the message sending the message confidential information. If a trojan is loose without encryption. The user might also be given inside the enterprise the security of the email the option of having the message printed out and system is moot in any case. sent by courier or sending the recipient a notice telling her to retrieve the message from a secured In cases where the ‘encryption gap’ is a concern, Web site. the process of decryption, scanning for active code and re-encryption could be performed by trustworthy hardware configured to refuse any Fortunately DNSSEC21 meets this objection for administrative interference. both XKMS and the DKIM DNS key distribution. The principal obstacle to DNSSEC Complex Trust deployment has been the lack of a compelling use case for the domain name owner. The Infrastructures professional Internet criminal attacks the The protocol profile described so far allows weakest, most profitable link in the chain. Until authentication and encryption capabilities to be the systemic security failures of email are added to an email application with a minimum of addressed the security shortcomings of the DNS code and without affecting usability. While these are practically irrelevant. 
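The subject-line handling of the prototype's confidentiality option can be sketched as a small transformation: the real subject moves into the body (which is what gets encrypted) and a fixed placeholder travels in the clear. The header layout below is a simplification for illustration.

```python
# Sketch of the prototype's confidentiality option: the real subject
# line is folded into the (to-be-encrypted) message body and replaced
# by a fixed placeholder, so no confidential text travels in the clear.

def apply_confidentiality(subject: str, body: str,
                          placeholder: str = "Confidential"):
    """Return the (subject, body) pair to encrypt and transmit."""
    protected_body = "Subject: " + subject + "\r\n\r\n" + body
    return placeholder, protected_body

sent_subject, sent_body = apply_confidentiality(
    "Q3 acquisition terms", "Draft attached.")
assert sent_subject == "Confidential"          # only this is visible in transit
assert "Q3 acquisition terms" in sent_body     # real subject rides inside
```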
Using the DNS as the capabilities are likely to be sufficient to meet the lynchpin of a ubiquitous cryptographic security security needs of most enterprises they do not system for email creates one of the strongest necessarily meet the needs of an enterprise which business cases imaginable. has already achieved a substantial deployment of a sophisticated PKI built on traditional Responding to change principles. As previously mentioned one of the most Fortunately XKMS provides an answer to these important tests of a security infrastructure is its cases as well. All that is necessary is for the ability to respond to changing needs. While it is email application that is attempting to locate an impossible to foresee every need a system that is encryption or signature key to delegate the task designed to meet the foreseeable needs is much to a local XKMS Validate service discovered more likely to meet unforeseen needs as well. using the same DNS SRV mechanism used to discover Locate and Registration services. Document Lifecycle Security During the development of the prototype a minor The next major step forward in Information bug was discovered in the XKMS specification security is likely to be a transition from transport which only defines a single SRV prefix for and message based protection to schemes that identifying an XKISS Locate or Validate service. protect the integrity and confidentiality of While these functions might be combined in a documents throughout their entire life cycle. single server the Locate service is primarily While an email message may contain sensitive concerned with servicing external requests and information an attached spreadsheet titled the Validate service is like the Registration ‘Accounts’ is almost certain to. service essentially an exclusive service for the Various schemes for ‘Digital Rights local domain. 
Management’ or ‘Content Management’ have It is therefore more likely that a Validate service been proposed but in practice most effectively would be combined with a Registration service end at the enterprise border. Without the ability than a Locate service. A simple solution to this to exchange the necessary key information oversight is to define a separate SRV prefix for across the open Internet it is not possible for the the Validate service: CFO to send a document to external counsel for review, a sales person to send confidential _VALIDATE_XKMS_XKRSS_SOAP_HTTP contract proposal to a customer or meet many similar real world business security needs. DNS Security Although the XKMS based key distribution A possible objection to the use of the DNS as a system and SRV discovery mechanism described key distribution or service discovery mechanism in this paper is applied to the PGP and S/MIME as described in this paper is that the security of encryption formats it could in principle be the key distribution infrastructure is ultimately extended to support DRM or CM encryption dependent on the security of the DNS, a protocol formats as well. Alternatively if this approach that does not currently have a deployed proved to be too constraining the same SRV cryptographic security infrastructure. While DNS discovery mechanism could be applied to a security has not proved to be a source of chronic SAML22 service publishing the appropriate security problems as email has it is clearly authorization assertions. unsatisfactory for the security of a cryptographic security protocol to rely on an insecure infrastructure. Incremental Advances in private key encrypted under a symmetric key shared only by Alice and the XKMS Service. 
Cryptology The requirement for public keys to be kept An ongoing concern for every developer of a private effectively eliminates the flexibility and cryptographic protocol is that advances in convenience that makes public key cryptography cryptanalysis might result in the underlying such an attractive technology. In effect the cryptographic algorithms being compromised. parties end up with the convenience of a Fortunately there is good reason to believe that symmetric system and the performance of an DKIM and XKMS both offer realistic asymmetric one. This is however an acceptable mechanisms for achieving a transition from one price to pay in the context of a worst case encryption algorithm to another. A paper scenario in which the objective is to transition simulation of a transition from the current RSA the network from the use of public key based based signature algorithm to an ECC algorithm technology to a symmetric system without a loss was conducted with satisfactory conclusions23. of service or functionality. The only addition required to the XKMS Quantum Computing protocol is the specification of appropriate The worst case scenario for developments in algorithm identifiers and (as keys are now cryptanalysis is the development of a quantum specific to a relationship between two users computer capable of performing calculations of rather than just a key holder) a mechanism to significant complexity. Such a machine could in allow the counterparty to the communication to principle break every public key algorithm be specified. A possible objection to this currently in use and it is prudent to assume that approach is that each message would have to this represents an intrinsic property of public key contain both a public and a private key. The use algorithms. of a public key encryption mechanism such as ECC that supports a more compact public key Fortunately quantum computing is not currently would meet this objection. 
believed to threaten symmetric key algorithms in the same degree and even the best quantum Conclusions computer cannot factor an RSA public key it does not know. These premises and a minor The problems of deploying ubiquitous email modification to the XKMS key information security are significant but as this paper protocol allow an XKMS configuration to be demonstrates may be met by using a established which is secure even if the adversary combination of existing protocols which are with has a quantum computer yet remains compatible the sole exception of DKIM all existing with legacy systems. standards. The challenge of email security is thus similar to the challenge facing the field of In the standard public key model everyone who networked hypertext applications in the early wants to send an encrypted message to Alice 1990s. The components all exist. The challenge uses the same public key. In the modified model that must be met is integrating those components a separate key pair is established for each in such a way that the user experience is fluent, correspondent. The key Alice discloses to Bob is seamless and learned automatically. different from the key she discloses to Carol. The use of separate key pairs for each bilateral Despite the insistence that the user interface be at relationship allows the keys to be kept least as simple as the user interface for insecure confidential so that Alice’s public key used to email the system described in this paper offers at receive encrypted email from Bob is only least as much security as existing schemes. It is disclosed to Bob. Mallet cannot then not only possible to achieve usability and cryptanalyze the key no matter how effective his security, it is impossible to achieve security in quantum computer might be. practice unless an uncompromising approach is taken to both. In effect the XKMS services at both ends of the communication act in the manner of a Kerberos24 Acknowledgements Key Distribution Center. 
The keying material that Bob receives from Alice’s XKMS Locate This paper has greatly benefited from the work service has an additional element carrying the and insights of many people. In particular Nico Popp, Siddharth Bajaj, Alex Deacon and Jeff Burstein at VeriSign and Mark Delaney, Miles Libbey (Yahoo), Jim Fenton (Cisco), John Levine, Harry Khatz (Microsoft), Barry Leiba Status Protocol – OCSP, IETF, June 1999. (IBM) and Stephen Farrell (Trinity College http://www.ietf.org/rfc/rfc2560.txt Dublin) in the DKIM working group. The Secure 13 Letterhead concept was developed from concepts Phillip Hallam-Baker, Shivaram H. Mysore, originally proposed by Stefan Santesson and XML Key Management Specification (XKMS 2.0), W3C Recommendation 28 June 2005, refined by Amir Herzberg at Haifa University. XKMS http://www.w3.org/TR/xkms2/ 14 W.Diffie and M.E.Hellman, New directions in cryptography, IEEE Trans. Inform. Theory, 1 Central Intelligence Agency Inspector IT-22, 6, 1976, pp.644-654. General Report Of Investigation Improper 15 Kohnfelder, Toward a Practical Public Key Handling Of Classified Information By John M. Cryptosystem, in Department of Electrical Deutch February 18, 2000 Engineering. 1978, MIT. 2 Alma Whitten and J. D. Tygar. Why Johnny 16 A. Gulbrandsen, P. Vixie, L. Esibov, RFC Can't Encrypt, 8th Usenix Security Symposium, 2782 A DNS RR for specifying the location of 1999 services (DNS SRV). IETF, February 2000. 3 Jon Callas, PGP Inc. CTO, http://www.ietf.org/rfc/rfc2782.txt. http://www.pgp.com/library/ctocorner/automagic 17 B. Ramsdell, RFC 3851 Secure/Multipurpose al.html Internet Mail Extensions (S/MIME) Version 3.1 4 Microsoft Inc, Message Specification, IETF, July 2004, http://www.microsoft.com/windows/lifecycle/def http://www.ietf.org/rfc/rfc3851.txt ault.mspx 18 J. Callas, L. Donnerhacke, H. Finney, R. 
5 US Department of Defense statistic, see Thayer, OpenPGP Message Format, IETF, e.g.http://www.defenselink.mil/pubs/dod101/dod November 1998, 101_for_2002.html http://www.ietf.org/rfc/rfc2440.txt 6 WalMart Inc. see: 19 S. Boeyen and P. Hallam-Baker, Internet http://walmartstores.com/GlobalWMStoresWeb/ X.509 Public Key Infrastructure navigate.do?catg=1 Repository Locator Service, RFC 4386, 7 http://www.ietf.org/rfc/rfc4386.txt Phillip Hallam Baker, The dotCrime 20 Manifesto, To be Published Peter Gutmann, Certificate Store Access via 8 HTTP, RFC 4387 E. Allman, J. Callas, M. Delany, M. Libbey, http://www.ietf.org/rfc/rfc4387.txt J. Fenton, M. Thomas, DomainKeys Identified 21 Mail (DKIM), IETF Draft, July 9, 2005 R. Arends, R. Austein, M. Larson, D. 9 Massey, S. Rose, RFC 4033 DNS Security P. Faltstrom, P. Hoffman, A. Costello, FRC Introduction and Requirements, IETF, March 3490 Internationalizing Domain Names in 2005, http://www.ietf.org/rfc/rfc4033.txt Applications (IDNA), March 2003 22 http://www.ietf.org/rfc/rfc3490.txt E. Maler et al., Assertions and Protocols for 10 the OASIS Security Assertion Markup Language S. Santesson, R. Housley, T. Freeman, RFC (SAML). OASIS, September 2003. Document ID 3709 - Internet X.509 Public Key Infrastructure: oasis-sstc-saml-core-1.1 http://www.oasis- Logotypes in X.509 Certificates, IETF, February open.org/committees/security/ 2004, http://www.ietf.org/rfc/rfc3709.txt 23 11 Phillip Hallam-Baker, DKIM Transitions, To Phillip Hallam-Baker, Use of PKIX be published Certificates in DKIM, September 2004, 24 http://www.ietf.org/internet-drafts/draft-dkim- B. Clifford Neuman and Theodore Ts'o, pkix-00.txt Kerberos: An Authentication Service for 12 Computer Networks, IEEE Communications, M. Myers, R. Ankney, A. Malpani, S. 32(9) pp33–38. September 1994, Galperin, C. 
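To make the SRV-based discovery described above concrete, here is a minimal sketch, not taken from the paper's prototype: only the _VALIDATE prefix comes from the paper, while the helper names and the sample record text are invented for illustration, and a real client would issue the query through a DNS library rather than parse a literal string.

```python
# Illustrative sketch of DNS SRV based XKMS service discovery.
VALIDATE_PREFIX = "_VALIDATE_XKMS_XKRSS_SOAP_HTTP"  # prefix proposed in the paper

def srv_query_name(prefix: str, domain: str) -> str:
    """Owner name to query for the SRV record advertising the service."""
    return f"{prefix}.{domain}"

def parse_srv_rdata(rdata: str) -> dict:
    """Split SRV RDATA 'priority weight port target' into labelled fields."""
    priority, weight, port, target = rdata.split()
    return {"priority": int(priority), "weight": int(weight),
            "port": int(port), "target": target.rstrip(".")}
```

A mail client would query `srv_query_name(VALIDATE_PREFIX, "example.com")`, pick the lowest-priority answer, and send its XKMS Validate request to the returned host and port.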
Achieving Email Security: Luxury Usability
Phillip Hallam-Baker, Principal Scientist, VeriSign Inc. © 2004 VeriSign, Inc.

Slide outline:
• Cars
• Usability is not enough
• Luxury (A Stretch Goal)
• Is Luxury Possible? An Existence Proof
• Video Game: A user experience so good people will pay to use it
• Security Goal: Protect assets against risks
• The real end point
• Plenty of Technology
• Selection not creation: + DKIM, + X.509 Logotype, + XKMS, + PGP, + S/MIME; – PKIX*, – OCSP*, – SCVP, – X.500, – SAML, – WS-Trust
• Cost of Change: Endpoints $$$$$, Network $$, Internet $$$$
• Luxury Requirement #1: Respect me. “GED/J d-- s:++>: a-- C++(++++) ULU++ P+ L++ E---- W+(-) N+++ o+ K+++ w--- O- M+ V-- PS++>$ PE++>$ Y++ PGP++ t- 5+++ X++ R+++>$ tv+ b+ DI+++ D+++ G++ +++ e++ h r-- y++**” I don’t want to be you. Users do not aspire to be computer experts. Don’t try to change me.
• Education, not Training: education is empowerment, training is mere instruction
• Luxury Requirement #2: Anticipate my needs. Declarative, not imperative: “The rooms to be commodious”
• Luxury Requirement #3: Clear Use Model. Domain Centric Security: security policy set at the domain level. User choice? Buy a domain name, they cost $8/yr. Owning the domain name is a security issue.
• Luxury Requirement #4: Eliminate the unnecessary
• Luxury Requirement #5: Provide the necessary and the desirable. What is necessary? Where did this message really come from?
• Luxury Requirement #6: Please me
• Demonstration
• How: DKIM, in the IETF process – sign messages transparently
• Secure Internet Letterhead: user-friendly name for the LOGOTYPE extension (IETF Proposed Standard); add an X.509 certificate to the DKIM key record
• PGP, S/MIME: IETF Draft standards; encryption works fine, need key discovery
• XKMS: W3C Recommendation; key centric PKI; register end user keys; key discovery; connect a COTS client to extreme PKI
• Summary (component, standard, use): DKIM (IETF WG) – transparent signature; PKIX LOGOTYPE (IETF) – Secure Letterhead; DNS SRV (IETF) – service discovery; XKMS (W3C) – key management; SSL/TLS (IETF) – encryption; PGP, S/MIME (IETF) – encryption; packaging and selection – TBS
• Deployment cost: ~10,000 lines of code
• Accountability: Accountable Email = Authentication + Accreditation + Consequences; accountability for all
• Conclusion: Our achievements are only limited by our aspirations
• Thank You – Questions © 2004 VeriSign, Inc.

DomainKeys Identified Mail (DKIM) and PKI
Jim Fenton
NIST PKI Conference 2006 © 2006 Cisco Systems, Inc. All rights reserved.

DKIM Background
• DKIM is a proposal for e-mail message signatures being standardized by the IETF
• Key distribution is based on DNS: a field in the signature specifies the location of the key; keys are stored in the _domainkey subdomain of the signer’s DNS hierarchy
• Raw keys, not certificates, are used
• Signatures represent the signing domain, not the actual author; however, the domain owner may delegate signing authority

Deployment Model – Simple Case
[Diagram: a signer in the mail origination network (MSA → MTA) and a verifier in the mail delivery network (MTA → MDA), connected by SMTP; the verifier performs a DNS public key query/response; delivery to the user via POP, IMAP, MAPI, etc.]
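The verifier's key fetch in the simple deployment model above can be sketched as follows; the helper names are invented for this example, and the authoritative record syntax is in the DKIM drafts. The selector and domain from the signature header determine a name under _domainkey, and the record found there is a list of tag=value pairs.

```python
def dkim_key_name(selector: str, domain: str) -> str:
    """DNS name a verifier queries: keys live under the signer's _domainkey subdomain."""
    return f"{selector}._domainkey.{domain}"

def parse_tag_list(txt: str) -> dict:
    """Parse a semicolon-separated tag=value record, e.g. 'k=rsa; p=MIGf...'."""
    tags = {}
    for field in txt.split(";"):
        field = field.strip()
        if field:
            name, _, value = field.partition("=")
            tags[name.strip()] = value.strip()
    return tags
```

Given a signature whose header carries selector `beta` and domain `example.net`, the verifier would query `beta._domainkey.example.net` and read the public key from the record's `p=` tag.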
“Frequently” Asked Questions
• Why not use an existing signature standard such as S/MIME?
• If not, why not use certificates for key management?

What About S/MIME?
• The signature semantics are wrong: S/MIME signatures represent the author, not the domain owner. S/MIME (and PGP) signatures are still useful for signing message content with the “usual” semantics.
• Transparency of signatures is important: DKIM signatures will be applied for all mail from some domains; users (senders and recipients) may not expect this; help desk load is a concern – and impedes deployment.

What About Certificates?
• Concern about disenfranchising some domains by the requirement to get a cert – could be costly for the third world
• Must be able to revoke signing authority quickly – frequent updates to a potentially very large CRL
• Size matters
• New requirement that the domain owner be in the trust chain – different from current low-assurance certificates

Revocation Issues
• Delegation of signing authority is needed to support important use cases: outsourced applications (benefits, etc.); e-mail marketing; mobile users who can’t/don’t submit messages to the domain
• Some domains will issue signing keys to some users
• What happens when a user with a key leaves the domain? The keyholder may be terminated for cause (e.g., abuse); very rapid (within minutes) revocation is required

Conclusion
• DNS provides a useful pseudo-PKI for DKIM: lightweight transactions; cached by the infrastructure (although we do need to consider infrastructure burdens); easily revoked; under direct control of the domain
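The "easily revoked, under direct control of the domain" point can be illustrated with a toy model; the dict below stands in for the signer's DNS zone, and nothing here is from a real implementation. Because the verifier fetches the key on each verification, deleting or blanking the selector's record withdraws signing authority as soon as caches expire.

```python
# In-memory stand-in for the signing domain's DNS zone.
zone = {"sales._domainkey.example.com": "k=rsa; p=BASE64KEYDATA"}

def revoke(name: str) -> None:
    """Revocation is just record removal: no CRL distribution involved."""
    zone.pop(name, None)

def key_is_usable(name: str) -> bool:
    """A verifier treats a missing record, or an empty p= tag, as revoked."""
    record = zone.get(name)
    return record is not None and "p=" in record and not record.rstrip().endswith("p=")
```

Contrast this with the CRL-based alternative discussed in the slides, where the same revocation would require publishing and re-fetching a potentially very large list.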
DKIM Seen Through a PKIX-Focused Lens
April 5, 2006
Tim Polk, tim.polk@nist.gov

Observations on E-Mail
• Spam is rapidly overwhelming all that is good about email – I delete 90% of my mail unread – much of what is left is garbage – a small percentage of what I deleted was probably important (I’ll never know!)
• Anything that helps me identify messages I should read is awesome

Does DKIM Solve the Right Problem?
• While the techniques specified by the DKIM working group will not prevent fraud or spam, they will provide a tool for defense against them by assisting receiving domains in detecting some spoofing of known domains
• “Solve” may be too strong a word, but I think it is on target

Observations on DKIM
• In DKIM, there is no dependency on public and private key pairs being issued by well-known, trusted certificate authorities – a feature for deployment, but perhaps also the Achilles heel
• In DKIM, the verifier requests the public key from the claimed signer directly – and trusts it because it got it from the DNS?

Is the Foundation Sufficient?
• DKIM relies on DNS as the initial mechanism for publishing public keys – DNS poisoning is not that difficult, it just isn’t that interesting in most cases. DKIM makes it interesting.
• DKIM sender signing policy statements are expected to be very simplistic – fine to start, but experience shows one-size-fits-all policies don’t fit anyone

DKIM Solution Strength
• IMHO, DKIM provides an incremental improvement in security – for the near term, that is all we can ask or expect – for the long term, it isn’t nearly good enough

Conclusions
• DKIM will be far better than nothing, and really ought to be deployed aggressively.
• DKIM’s success will provide real incentives for attackers – spammers will exploit the DNS-based key distribution and weak policy schemes to alter recipient behavior
• The good news: DKIM is designed to be extensible to other key fetching services – an X.509 PKI based solution should be one of the well defined services

Handle-DNS Integration Project Report
Handle-DNS Working Group, CNNIC/CNRI

Project Objective
• Take advantage of the Handle System to provide security services for the DNS namespace, including: secured DNS resolution (whenever needed); discretionary administration and dynamic update; access control and privacy protection; delegation of credential validation
• Co-exist with existing DNS operation; no change needed to the DNS client.

Project Background
• CNRI – non-profit research institute – developed the Handle System in Java, specified in RFC 3650, 3651, 3652 – open source distribution at http://www.handle.net
• CNNIC – “.cn” TLD registry in China – developed the Handle System in C – integrated with DNS BIND9 – project web page: http://hdl.cnnic.cn

Handle System Overview
• A global identifier service, providing identifier service for any digital resource on the Internet.
• Distributed, scalable service infrastructure similar to DNS.
• Efficient name-resolution and administration protocol supporting both TCP and UDP connections.
• Built-in security options for both name resolution and administration.

Handle System Service Framework
The Handle System is a collection of handle services, each of which consists of one or more replicated sites, each of which may have one or more servers.
[Diagram: a client resolving through the Global Handle Registry (GHR) to local handle services (LHS), each replicated across sites and servers; the example handle 123.456/abc carries URL values such as http://www.acme.com/ (index 4) and http://www.ideal.com/ (index 8).]

Handle System Security
• Secure handle resolution, including options for data confidentiality and service integrity checking
• Discretionary namespace and identifier attribute administration, independent of host administration, that allows creation, deletion, and modification of identifiers and/or identifier attributes
• Standard access control model per individual identifier attribute, essential for privacy protection
• Standard mechanism for credential validation per individual handle attribute

Handle-DNS Implementation
• Basic implementation – Handle Server in C/C++ (server/client) – BIND 9 standard distribution
• Additional modules – DNS interface integrated with the handle server – cache/preload module – database connection pools – C-version Handle-DNS admin toolkit
• Supports DNS resolution and zone load
• Performance improvements – exception handling – memory leak protection – thread pool management

Design & Implementation
[Diagram: integrated Handle-DNS server combining BIND 9.3.0 (DNS protocol, port 53) with a handle server (Handle protocol, port 8000; handle interface, port 2641), plus the Handle-DNS admin toolkit.]

Handle-DNS Admin Toolkit
• C-version Handle-DNS admin toolkit – supporting DNS resource record query & management – supporting DNS zone file upload

Benchmark
• Benchmark configuration – client and server in the same LAN, connected by a 100 Mbps Cisco switch. Client: Dell PowerEdge server, 2.8 GHz CPU, 1 GB RAM, 38 GB hard disk, 100 Mbps. Server: same configuration as the client.
• Testing method – compare resolution performance between the C-version Handle-DNS server and the Java-version handle server under the same hardware configuration.
• Handle protocol – test software written by CNNIC
• DNS protocol – QueryPerf, benchmark software supplied with BIND
• Database – MySQL, 1M handle records

Benchmark (Java/C)
• TCP interface for the Handle-DNS server; comparison between the Java version and the C version
• Resolution speed – 5–10 ms for the C version vs. 25–35 ms for the Java version, making the C version 2.5–7 times faster
• Concurrent requests – 40,000 queries (C-version Handle-DNS) vs. 4,000 queries (Java)
• CPU usage – 90% for Java, below 10% for C

Benchmark (Handle-DNS/BIND)
• UDP interface for the DNS protocol; compared to BIND 9.3.0 – comparable resolution performance, though handle records are larger than DNS records

Prototype Applications: ENUM
• ENUM puts telephone numbers in the DNS, mapping PSTN phone numbers to URLs – one number for all services on the Internet
• Based on the DNS protocol – ENUM zones under “e164.arpa.” – using DNS “NAPTR” resource records and DNS resolution
• Example: +17036208990 becomes 0.9.9.8.0.2.6.3.0.7.1.e164.arpa, whose NAPTR records can point to tel:+15712205650, sip:samsum@cnri.reston.va.us, http://www.cnri.reston.va.us, or mailto:samsum@cox.net

Prototype Application (ENUM)
• A simple ENUM call flow
• Handle-ENUM secure resolution & administration – secure resolution with authentication – access control for private ENUM records – distributed administration

Prototype Application (Secure Resolution)
• Secured DNS resolution via the Handle protocol interface – secure DNS resolution (against man-in-the-middle attack) – privacy protection – DNS administration

Future Plan
• Package the Handle-DNS software for public release.
• Deploy the Handle-DNS server in the “.cn” TLD registry and its subsidiaries.
• Establish an ENUM service and client software based on the Handle-DNS interface.

Thanks! DEMO

International Grid Trust Federation
Michael Helm, ESnet/LBL
On behalf of IGTF & TAGPMA
4 April 2006

What Are Grid PKIs For?
• We exist to serve the grid community in terms of authentication:
– X.509 certificates are an essential component of Grid security mechanisms
– Authentication supports diverse authorization methods (including ongoing research)
– X.509 Certification Authorities provide a focal point for policy management and key lifecycle
– IGTF and regional PMAs provide coordination and interoperability standards for Grid PKIs
08/01/17 IGTF - NIST

Outline (more than we have time for today)
• Essentials on Grid security
• International Grid Trust Federation (IGTF)
• IGTF component PMAs
• Certificate “profiles”

Essentials on Grid Security
• Access to shared services – cross-domain authentication, authorization, accounting, billing – common generic protocols for collective services
• Support multi-user collaboration – may contain individuals acting alone – their home organization administration need not necessarily know about all activities – organized in ‘Virtual Organizations’
• Enable ‘easy’ single sign-on for the user – the best security is hidden from the user as much as possible
• And leave the resource owner always in control

Virtual vs.
Organic Structure
• Virtual communities (“virtual organizations”) are many
• An individual will typically be part of many communities – but will require single sign-on across all these communities
[Diagram: people (faculty, staff, students, researchers), file servers and compute servers in Organizations A and B, with a virtual community drawing members and resources from both. Graphic: GGF OGSA Working Group]

Stakeholders in Grid Security
• Current grid security is largely user centric – different roles for the same person in the organic unit and in the VO
• There is no a priori trust relationship between members or member organizations – Virtual Organization lifetime can vary from hours to decades – the VO is not necessarily persistent (both long- and short-lived) – people and resources are members of many VOs
• … but a relationship is required – as a basis for authorising access – for traceability and liability, incident handling, and accounting

Separating Authentication and Authorization
• Single authentication token (“passport”) – issued by a party trusted by all (“CA”) – recognised by many resource providers, users, and VOs – satisfies the traceability and persistency requirement – in itself does not grant any access, but provides a unique binding between an identifier and the subject
• Per-VO authorisations (“visa”) – granted to a person/service via a virtual organization – based on the ‘passport’ name – acknowledged by the resource owners – providers can obtain lists of authorised users per VO, but can still ban individual users

International Grid Trust Federation
IGTF is the trust “glue” for Grids.
The Grid is a distributed computing paradigm and middleware supporting large scale, world-wide scientific research such as the LHC in physics. IGTF is composed of three regional PMAs, each supporting a separate zone of the world: EUGridPMA, TAGPMA, and APGridPMA. How can we integrate better with other PKI initiatives, and how do we determine when and whether this makes sense?

Extending Trust: IGTF – the International Grid Trust Federation
• Common, global best practices for trust establishment
• Better manageability of the PMAs
• Regional members: EUGridPMA (European Grid PMA), TAGPMA (The Americas Grid PMA), APGridPMA (Asia Pacific Grid PMA)

Grid PKI Software and Limitations
• http://www.globus.org/toolkit/docs/4.0/security/ – however, many Grid environments operate in legacy (pre-4.0) mode
• PKI authentication – X.509 certificates, close to IETF PKIX RFC 3280 – proxy certificates (RFC 3820) for short-lived delegated rights, plus numerous legacy (pre-3820) implementations – mutual authentication based on the TLS model; openssl is an essential software component
• Authorization – many different solutions: simple lists and map files (like UNIX account services); account management services; delegated rights attributes in proxy certificates; X.509 authorization certificates; GGF-managed Web Services-based authorization services; Shibboleth-Grid bridging; and more…
• Credential management – software tokens – MyProxy, a credential store – hardware tokens

Federation Model for Grid Authentication
[Diagram: CAs 1…n bound by a charter and guidelines, with relying parties 1…n joined through an acceptance process.]
• A federation of many independent CAs – policy coordination based on common minimum requirements (not ‘policy harmonisation’) – acceptable to major relying parties in Grid infrastructures
• No strict hierarchy with a single top – spread liability and enable failure containment (better resilience) – maximum leverage of national efforts and subsidiarity
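A minimal sketch of this federation model follows; the PMA names are real, but the CA names and helper functions are illustrative, not the actual accreditation lists. Relying parties accept any CA accredited by one of the regional PMAs via the common distribution, with no hierarchy above the member CAs.

```python
# Toy accreditation table standing in for the IGTF distribution.
ACCREDITED = {
    "EUGridPMA": {"Example EU CA", "Example CERN-like CA"},
    "TAGPMA": {"Example Americas CA"},
    "APGridPMA": {"Example AP CA"},
}

def accrediting_pma(ca_name: str):
    """Return the regional PMA that accredits ca_name, or None."""
    for pma, cas in ACCREDITED.items():
        if ca_name in cas:
            return pma
    return None

def relying_party_accepts(ca_name: str) -> bool:
    """Collective acceptance: any accredited CA, from any regional PMA."""
    return accrediting_pma(ca_name) is not None
```

The design choice mirrors the slide above: a failed or withdrawn CA is simply dropped from the table, containing the failure without a single root of trust.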
IGTF Federation Common Policy
[Diagram: the IGTF Federation Document binds APGridPMA (CA A1, …), EUGridPMA (CA E1, CA E2, …), and TAGPMA (CA T1, …) through trust relations, subject namespace assignment, common authentication profiles (Classic – EUGridPMA; SLCS – TAGPMA), distribution, and naming conventions; worldwide relying parties see a uniform IGTF “mesh”.]

International Grid Trust Federation: “The IGTF” – WWW.GridPMA.org
• 2002: GGF turns down PMA proposal – grassroots effort begins
• Commissioned March 2003 (Tokyo); chartered October 5th, 2005 at GGF 16 (Chicago)
• Federation of European, Asian, and Western Hemisphere Policy Management Authorities – focused on identity management and authentication for Grids
• Regional authorities:
– EU Grid Policy Management Authority – EGEE: Enabling Grids for E-science in Europe
– Asia Pacific Policy Management Authority – APGrid: National Institute of Advanced Industrial Science and Technology
– The Americas Grid PMA – newly chartered September 2005 – Canada and USA (DOE, NSF); Latin American organizations soon
• Establishment of top level CA registries and related services: root CA certificates, CA repositories and CRL publishing points – EU Grid PMA registry, de facto (CNRS: French National Center for Scientific Research) – Asia Pacific CA registry (AP PMA) – TERENA TACAR (TERENA Academic CA Repository)
• Standards – certificate policies, certificate profiles, accreditation – the Global Grid Forum publishes standards and community best practices.
IGTF (2)
• IGTF Federation – namespace specification and allocation (NB: Grids do not use directory-managed naming) – Grid PKI support file “Gold” distribution, provided to middleware packagers such as VDT, large scale Grids &c
• IGTF managed certificate profiles – certificate profiles: a subset of certification practices describing the essential, distinguishing characteristics of Grid certificate usage – developed by a regional PMA or member organization
• Current profiles:
– “Classic” X.509 CAs – development managed by EUGridPMA (www.eugridpma.org) – influenced by NIST and PKI industry best practice
– Short-Lived Certificate Services – development managed by TAGPMA (www.tagpma.org) – bridge site authentication services to Grid-compatible PKI
– Experimental CA – development managed by APGridPMA (www.apgridpma.org)
• Profiles that need to be developed:
– Bridge-based PKI (policy mapping, transitive trust)
– Active Credential Store (e.g. MyProxy-managed X.509 certificates)

Building the Federation
• Providers and relying parties together shape the common minimum requirements – several profiles for different identity management models and different technologies
• Authorities testify to compliance with profile guidelines
• Peer-review process within the federation to (re)evaluate members on entry and periodically
• Reduce effort on the relying parties – a single document to review and assess for all authorities – collective acceptance of all accredited authorities
• Reduce cost on the authorities – but participation in the federation comes with a price
• … the ultimate decision always remains with the RP

EUGridPMA
Green: countries with an accredited CA – the EU member states (except LU, MT) plus AM, CH, IL, IS, NO, PK, RU, TR, and a “SEE-catch-all”. Other accredited CAs: DoEGrids (.us), GridCanada (.ca), CERN, ASGCC (.tw)*, IHEP (.cn)* (* migrated to APGridPMA per Oct 5th, 2005)

EUGridPMA
•
www.eugridpma.org
• Features – ~36 members, most from the EU, some from closely affiliated countries – chaired by David Groep (NIKHEF) – the senior partner – “Classic” X.509 Grid profile
• Member organizations/countries – canonical list: http://www.eugridpma.org/members/index.php – membership includes many European national and regional (e.g. Nordunet, Baltic Grid) Grid projects; Canarie (Canada); DOEGrids and FNAL (US); significant relying parties such as LHC; several AP Grid CAs

The Americas Grid PMA – Members
HEBCA/USHER/Dartmouth College; Texas High Energy Grid; Fermi National Laboratory (FNAL); San Diego Supercomputing Center; TeraGrid; Open Science Grid (OSG); DOEGrids (US-DOE Labs); CANARIE (Grid Canada); EELA; Venezuela: ULA; Chile: REUNA; Mexico: UNAM; Argentina: UNLP; Brazil: UFF

TAGPMA
• The Americas Grid PMA – chartered September 2005 – very new
• www.tagpma.org
• Features – ~9 members: Canarie (CA) and US, and now EELA – several Latin American Grid projects to join soon – chaired by Darcy Quesnel (CANARIE) – Short Lived Certificate Server profile
• Member organizations/countries – canonical list: http://www.tagpma.org/members
• 1st TAGPMA member meeting: 27–29 March 2006, Rio de Janeiro (RDP)

EELA: E-Infrastructure Shared Between Europe and Latin America
• Through specific support actions, to position the Latin American countries at the same level as the European developments in terms of e-infrastructure (Grids, e-Science)
• http://www.eu-eela.org
• Kickoff meeting 30 January 2006
• Grid CAs at an early phase of the lifecycle – design & initial roll-out; accreditation soon
• Membership and project management: http://www.eu-eela.org/public/eela_about_partners.php
• Brazil: many other PKI
activities in play

Asia Pacific PMA
• Australia: APAC
• China: SDG, IHEP Beijing
• Hong Kong: HKU
• India: U. Hyderabad
• Japan: AIST, NAREGI, KEK, Osaka U.
• Korea: KISTI
• Malaysia: USM
• Singapore: NGO
• Taiwan: ASGC, NCHC
• Thailand: NECTEC
• USA: SDSC

APGridPMA (material provided by David Groep, IGTF chairman, from TF-EMC2 update Sep 05)
• www.apgridpma.org
• Features:
  – ~16 members from the Asia-Pacific region, chaired by Yoshio Tanaka (AIST)
  – 7 production CAs in operation: AIST, APAC, ASGC, IHEP, KEK, KISTI, NAREGI
  – "Experimental" CA profile
  – Auditing: standard practice & GGF effort
• Member organizations/countries:
  – Canonical list: https://www.apgrid.org/CA/CertificateAuthorities.html

Certificate Profiles
• Classic PKI
  – DOEGrids as example
• Short Lived Certificate Services
  – "Rotary" example
  – FNAL KX509 CA
• Experimental
  – Use at conferences, demos, short-term projects
• Other work
  – Bridge PKI
    • Grid PKI has no concept of policy mapping or levels
    • Grid PKI has no concept of transitive trust
    • US HEBCA needs this profile
    • Other services may be required as a result
  – Active Credential Store PKI
    • Extend the MyProxy model – link a CA to a credential store
    • Core problem: the service owns user private keys.
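The Short Lived Certificate Services profile above trades revocation machinery for expiry: a cert is trusted only for its brief lifetime. A minimal sketch of that lifetime check (the function name and the 11-hour lifetime are illustrative choices, not from the talk):

```python
from datetime import datetime, timedelta, timezone

# Illustrative lifetime: short-lived grid certs are issued for hours
# to days, not a year; the exact value here is an assumption.
SLCS_LIFETIME = timedelta(hours=11)

def is_still_valid(issued_at, now=None, lifetime=SLCS_LIFETIME):
    """A short-lived cert needs no CRL lookup: it is trusted only
    until issued_at + lifetime has passed."""
    now = now or datetime.now(timezone.utc)
    return issued_at <= now < issued_at + lifetime

issued = datetime(2006, 4, 4, 9, 0, tzinfo=timezone.utc)
print(is_still_valid(issued, now=issued + timedelta(hours=10)))  # within lifetime
print(is_still_valid(issued, now=issued + timedelta(hours=12)))  # expired
```

The design point is that compromise exposure is bounded by the lifetime, which is why the profile can drop per-certificate revocation entirely.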
Classic X.509 Certificate Profile
• Comprehensive security requirements for CA services
  – Evolved: Grid operational needs vs security best practices
  – Hardware Security Modules (HSMs) or offline operation
• Two fairly distinct classes of end-entity certificates:
  – Hosts and "Grid services" – essentially TLS server certs
    • Evolving concepts of ownership and rights
  – Users and software agents – client certificates
    • Strict identity management and verification requirements
    • We concentrate on this class here, but hosts are equally important
  – Missing (not yet defined): software signing; certificates for abstract entities (processes)

DOEGrids: Classic X.509 PKI
[Diagram: an offline vaulted root CA and Grid user PKI systems with Hardware Security Modules (HSMs), protected by a firewall, access-controlled racks, a secure data center, building security, LBNL site security, and intrusion detection, with access from the Internet]

Grid Classic PKI: People Certificate Workflow
1. Subscriber requests a certificate from the Registration Manager (RM), PKI1.DOEGrids.Org
2. RM posts a signing-request notice
3. The Registration Authority (RA) for the Subscriber retrieves the request
4. The RA agent reviews the request with the Grid project (CA sponsor)
5. The agent updates/approves/rejects the request
6. The approved certificate request is sent to the Certificate Manager (CM), the certificate signing engine
7. CM issues the certificate
8. RM sends an email notice to the Subscriber
9. Subscriber picks up the new certificate

FNAL KCA: Workflow
FNAL user certificate workflow:
1. Authenticate to the Kerberos KDC
2. Receive a Kerberos TGT
3. Present the Kerberos ticket and CSR to the CA
4. The KX509 CA returns a short-lived certificate
5.
Use the certificate with Grid services (FNAL and external Grid resources)

Short Lived Certificate Service Architecture
[Diagram: sources of identity – LDAP, Kerberos, RADIUS, Shibboleth IdP, Windows domain, other local site/VO authentication infrastructure – feed a "Grid identity mint": authentication-protocol queries/responses go through SLCS front ends (slic) to a Certificate Authority, which issues short-lived Grid identity/proxy/attribute certificates. The CA can "rotor" through a suite of authentication methods as needed, adding custom extensions/delegations as needed.]

"Rotary" SLCS
• The concept is an expansion of KX509-like operation from the enterprise to the scope of a Virtual Organization and a national network resource
• Mostly a matter of integration and federation
  – The federation agreements and interop are not trivial
• Shibboleth, and the rotary concept, need testing
• The CA can be replicated into (secure) sites
  – Our HSM technology may be able to change the definition of "secure site"

Certificate Validation Service
• Outsource certificate trust decisions to a trusted service. Benefits:
  – Light client – maintains one relationship, not tens to hundreds
    • Obviously, we cannot expect to eliminate ALL client trust decisions, nor is that desirable
  – The service can adapt more rapidly to changing conditions
  – Replication of the validation service can be managed more effectively
  – Provides certificate path discovery and path validation for bridge PKI architectures
    • Essential for Grid support of the Higher Education Bridge CA (HEBCA)
• OCSP (Online Certificate Status Protocol) is a subset, and an analogy
  – However, some OCSP deployment scenarios exacerbate existing scaling problems.
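The outsourcing idea above can be sketched as a client that keeps exactly one trust relationship, to a validation service, instead of tracking tens to hundreds of CAs itself. Everything here is a stand-in model (class names, dict-shaped "certificates"); a real deployment would speak a protocol such as OCSP to the service:

```python
# Sketch: a "light client" delegates path/status checks to a single
# trusted validation service rather than maintaining its own CA list.

class ValidationService:
    """Stand-in for a trusted validation authority: it holds the
    trusted-CA set and revocation data so clients do not have to."""
    def __init__(self, trusted_cas, revoked_serials):
        self.trusted_cas = set(trusted_cas)
        self.revoked = set(revoked_serials)

    def validate(self, cert):
        # cert is a plain dict for illustration: {'issuer', 'serial'}
        if cert["issuer"] not in self.trusted_cas:
            return "untrusted_issuer"
        if cert["serial"] in self.revoked:
            return "revoked"
        return "good"

class LightClient:
    def __init__(self, service):
        self.service = service  # the client's single trust relationship

    def accept(self, cert):
        return self.service.validate(cert) == "good"

svc = ValidationService(trusted_cas={"DOEGrids CA 1"}, revoked_serials={42})
client = LightClient(svc)
print(client.accept({"issuer": "DOEGrids CA 1", "serial": 7}))   # True
print(client.accept({"issuer": "DOEGrids CA 1", "serial": 42}))  # False: revoked
```

Updating trust (adding a CA, revoking a cert) then happens in one place, the service, which is the "adapts more rapidly" benefit the slide claims.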
Current Problems
• Usability vs security
• Integration with commercial and bridge CA infrastructures
• Integration with alternative and/or legacy authentication systems
• "Personal appearance" and Level of Assurance (LoA)
• Difficulty translating the CP/CPS into something understandable and usable by the community

Contacts & Acknowledgements
• IGTF: David Groep – davidg@eugridpma.org
• TAGPMA: Darcy Quesnel – darcy.quesnel@canarie.ca; Alan Sill (secretary) – Alan.Sill@ttu.edu
• EELA: Diego Carvalho – d.carvalho@ieee.org
• HEBCA: Scott Rea – Scott.Rea@Dartmouth.edu
• DOEGrids – doegrids-ca-1@doegrids.org (Dhivakaran Muruganantham, Tony Genovese, Michael Helm)

Open Science Grid Use of PKI: Wishing It Was Easy
A brief and incomplete introduction.
Doug Olson, LBNL (dlolson@lbl.gov)
PKI Workshop, NIST, 5 April 2006
www.opensciencegrid.org

Contents
• Overview of OSG
• Why we use X.509 PKI
• How we use it
• What's wrong with it
• Comments

www.opensciencegrid.org
• 21 registered scientific Virtual Organizations
• 51 compute resources, 6 storage resources (~20 additional on the integration grid)
• O(1000) running and O(1000) pending jobs (low usage due to growing pains)
• The strongest driver today is the LHC science program; many other science programs are also users and participants
• Interoperation with EGEE, TeraGrid, and numerous regional & campus grids
• 85% of DOEGrids PKI certificates: ~1000 OU=People, ~3000 OU=Services

How Is Trust Established? (or What Does "Trust" Mean?)
• $1B+ science programs have a 10+ year scientific, political, and technical development phase during which collaborations are established.
• Many MOUs are signed detailing responsibilities
  – Construction of the machine/accelerator/telescope/…
  – Construction of experimental equipment/detectors
  – Computational resource commitments
• Membership in a scientific collaboration is controlled by governing procedures
• The research program defines who is supposed to work together
  – PKI is a technical detail of the computing plans
  – The definition of which organizations must trust each other was established before anyone who understands PKI got involved, so the question is "How to trust?" more than "Whom to trust?"
  – However, OSG promotes an opportunistic computing model and would like to match VOs and resource providers with little or no advance agreement
• "Trust" within the PKI means the acceptable range of policies and procedures under which the computing resource providers and scientists can work together

Why Do We Use PKI?
• Globus GSI
• We have built and are growing a grid, and use whatever security infrastructure is available and practical
• Interoperability with the world-wide open science community is essential
  – Technical aspects
    • A functioning CA/RA
    • This means Globus pre-WS GSI (& WS GSI) X.509
    • Additional supporting infrastructure has been deployed: VOMS, GUMS, Prima, CA/CRL distribution
  – Bureaucratic aspects
    • Ability to establish and maintain trust by sites, VOs, and users
    • Accredited CAs
    • Therefore: TAGPMA and IGTF

How Do We Use PKI?
• The DOEGrids PKI operated by ESnet is our primary provider
  – CN=,OU={People|Services},DC=doegrids,DC=org
• OSG has asked TAGPMA to accredit CAs used in the grid community in the Americas and to provide us with the accredited list
• We operate the distributed human RA network to authenticate requests, via signed email & telephone
• End entities hold private keys
• OU=Services certs are used as SSL certs for host & service identification
• Virtual Organizations (VOs) manage users via VOMS servers, using the DN of the EE and issuer as an identifier and holding additional attributes for authorization
  – The user gets a short-lived proxy certificate with an extension holding authZ attributes signed by the VOMS server

How Do We Use PKI? (Validation, AuthZ)
• Certificate validation environment during a grid transaction
  – Proxy certificates (RFC 3820)
  – Trusted CA certs & CRL URLs downloaded from the VDT
  – CRL updates using EDG tools on each resource (from EU DataGrid, now EGEE2)
    • CRLs are only for long-lived certs; there are no tools for revoking just a delegated proxy certificate
• Resource authZ
  – The "recommended" means is role-based authZ using Prima & GUMS to interpret VOMS extended proxy certs and map to a local UID/GID based on attributes signed by the VOMS server
  – Many sites use classic pre-WS GSI and tools to download grid-mapfile entries from VOMS servers

What Is Wrong with It (1)
• Previous slide: in other words, there was a lot of missing infrastructure for using PKI for user authN/authZ in grid transactions
• Incomplete infrastructure for managing user private keys
  – Just files in users' home directory(ies)
  – Standardization of the end-user environment in the open science community is impossible
  – MyProxy helps
    • Substitution of private key/passphrase with username/password (huh???)
    • Reduce or eliminate end-user private key management
  – A Short Lived Certificate Service (SLCS) profile is moving through TAGPMA and IGTF that will apply to services like KCA (at FNAL & PSC) and a MyProxy-based CA issuing short-lived certs

What Is Wrong with It (2)
• X.509 needs mapping to the resource security infrastructure (uid/gid), which is site-specific
  – Grid-mapfile
    • But the proxy does not follow the process group, except by reliance on the same uid, and it is common practice to map an entire VO to a single uid
    • Maps only the DN, so the same person wanting different roles needs different DNs
  – Or the VOMS/Prima/GUMS infrastructure for role-based access control
  – Ownership of long-lived data???
• Use short-lived proxies to allow single sign-on
  – Then do credential renewal to get a long enough lifetime
• Revocation is cumbersome & slow
  – Symmetric with initial authentication & certificate issuance
  – Site requirements for incident response need a faster mechanism to suspend a user's privileges
• Certificate lifecycle management is rocky for us, but not the biggest trouble
• …

Comments
• PKI "works reasonably" for server certificates
• The infrastructure surrounding PKI for end-user certificates is incomplete and ad hoc
• I hope you all paid close attention to Angela Sasse's talk yesterday
  – I think people understand usernames/passwords and email addresses, and these should be enough ID tokens for end users
• AuthZ infrastructure being tied to PKI suffers from a mismatch between user requirements and underlying resource functionality; i.e., the trouble is not due to PKI, just coupled to it because of PKI-based ID

Extra Slides
• Example EGEE grid job: http://roc.grid.sinica.edu.tw/doc/LCG-2-UserGuide.html#SECTION00053100000000000000
• A large workflow example, from http://pegasus.isi.edu/pegasus/publications/sciprog_submitted.pdf: "Pegasus: Mapping Scientific Workflows onto the Grid", Ewa Deelman, James Blythe, Yolanda Gil, Carl Kesselman, Gaurang Mehta, Sonal Patil, Mei-Hui Su, Karan Vahi, Miron Livny, Scientific Programming, January 2005
• Authorization infrastructure: http://www.fnal.gov/docs/products/voprivilege/

ECC Support in Future Products
Microsoft, Red Hat, Sun

Why ECC?
● Security scales directly in proportion to key size.
● Performance also scales with key size.
● Ideal for crypto on devices (such as smart cards).
● Mandated by NIST for federal agencies (SP 800-57).
● ECC is now part of several standards, including TLS and S/MIME.

ECC Performance versus RSA
[Chart: operations/second on 64-bit Intel, at 80% restarts, for ECDH, ECDHE, and RSA across security levels from 80 bits (ECC 192 / RSA 1024) through 256 bits (ECC 521 / RSA 15360). ECDH is ECC using Diffie-Hellman, TLS_ECDH_ECDSA_WITH_RC4_128_SHA; ECDHE is ECC using ephemeral Diffie-Hellman (which gives perfect forward secrecy), TLS_ECDHE_ECDSA_WITH_RC4_128_SHA; RSA is SSL3_RSA_WITH_RC4_128_SHA. X-axis: cryptographic strength in bits.]

Vulnerability versus Key Size
Symmetric Key | RSA Key | ECC Key | Good until...
80            | 1024    | 160     | 2010
112           | 2048    | 224     | 2030
128           | 3072    | 256     | Beyond
All vendors are supporting key sizes that go well beyond 256 bits, and support NIST-endorsed curves.

Vendor ECC Offerings – Microsoft
● ECC is supported in a number of Microsoft products:
  – IE7, IIS, Certification Authority
  – Interfaces: SSPI, CNG, CAPI2
  – Standards: IETF ECC TLS draft, PKCS#12, PKCS#7
● Shipping in the Vista client
● Available in Beta 2 and as part of the CTP program

Vendor ECC Offerings – Red Hat
● TLS support for ECC in future versions of:
  – Firefox and Thunderbird
  – Red Hat Directory Server
  – Red Hat Certificate Server
  – Fortitude (Apache plus mod_nss)
● Supports "Suite B" ECC curves.
● Supports ECDH and ECDHE cipher suites

Vendor ECC Offerings – Sun
● Broad ECC support in Sun's product portfolio announced at RSA 2006
● First ECC-enabled offering: Sun Java Web Server 7.0, included in JES 5.0 (available later in 2006)
  – Supports all elliptic curves currently defined by NIST (including Suite B curves), SECG, and ANSI
  – Supports ECDH and ECDHE cipher suites
● ECC support planned for future versions of:
  – Java SE (full support for ECC ciphers; ECC crypto support initially via PKCS#11 and later via a pure Java library)
  – SPARC processors, Solaris, other middleware (footnote: additional details available under a non-disclosure agreement)

Interop Testing
● Product testing
  – IE7 and Firefox with Fortitude
  – IE7 and Firefox with IIS
  – IE7 and Firefox with Java Web Server 7.0
  – IE7 and Firefox with Apache with mod_ssl
● Other tests
  – NSS verifying Microsoft-generated certificates
  – Microsoft verifying NSS-generated certificates

More Info...
● Sun: vipul.gupta@sun.com
● Red Hat: rrelyea@redhat.com
● Microsoft: arimed@microsoft.com, kelviny@microsoft.com

Industry ECC Standards
● ANSI, "Public Key Cryptography for the Financial Services Industry: The Elliptic Curve Digital Signature Algorithm (ECDSA)", ANSI X9.62.
● ANSI, "Public Key Cryptography for the Financial Services Industry: Key Agreement and Key Transport Using Elliptic Curve Cryptography", ANSI X9.63, 2001.
● NIST, "Digital Signature Standard", FIPS 186-2, 2000 (defines "Recommended Elliptic Curves for Federal Government Use", Appendix 6).
● "ECC Cipher Suites for TLS", draft-ietf-tls-ecc-12.txt (approved for publication as an IETF RFC).

PKCS#11 and Mac OS X Keychain Integration
Work in Progress
Ron DiNapoli, Cornell University, CIT/ATA

Why Is This Needed?
♦ Apple Keychain Services is the recommended method (by Apple) for performing certificate-based operations.
♦ The Keychain is the only mechanism through which certificate-based operations can occur in Apple's native apps (Mail.app, Safari)

What Does It Provide?
♦ Keychain/PKCS#11 integration allows any PKCS#11 device to be used via Keychain Services under Mac OS X (Tiger only)
♦ Operations currently supported (by the infrastructure):
  - Signing operations
  - Encryption/decryption
  - Changing the PIN

Why Doesn't Apple Provide This?
♦ Apple wants the user to simply "plug the token in and have it work"
♦ PKCS#11 doesn't quite offer this experience
  - The user would need to specify a PKCS#11 library to be dynamically loaded for the token in question

How Does It Work?
♦ Beginning in Mac OS X v10.4 (Tiger), Apple added a component called Tokend to their security architecture
  - Used to handle hardware tokens
  - Some cards/tokens "supported" out of the box: BELPIC, CAC, MuscleCard
  - An OpenDarwin project is available to let anyone define (program) their own Tokend

A Customized Tokend
♦ To add support for a new token:
  - Take an existing Tokend project (OpenDarwin) and modify it
  - Name the resulting executable something different
  - Place it in /System/Library/Security/tokend/

How Many Tokends?
♦ Any system may have multiple Tokends
  - Installed in /System/Library/Security/tokend/
  - When a token is inserted, each tokend is launched and a standard method is called to determine whether a given tokend should handle the inserted token

Talking to the Token
♦ Once a Tokend has control, it may communicate with the token in any of the following ways:
  - Using built-in methods and ISO 7816 commands
  - Using other libraries which handle communicating with the token, such as PKCS#11 or the OpenSC libraries

How Is PKCS#11 Used?
♦ PKCS#11 usually involves a shared library loaded at run time.
♦ How does Tokend know what PKCS#11 library to load?
  - Implemented a System Preference pane
  - It manages a preferences file
  - The custom Tokend consults the preferences file to find out the name(s) of the available PKCS#11 libraries

What Is "In" the "Distribution"?
♦ Custom tokend daemon (tokend.PKCS11)
  - Installed in /System/Library/Security/tokend/
♦ Tokend/PKCS11 System Preferences pane
  - Installed in /Library/PreferencePanes/
♦ Preferences file (tokend.PKCS11.prefs)
  - Installed in /Library/Preferences/

Demonstration using tokend.PKCS11

Limitations
♦ No support for key generation
  - A limitation of the Tokend infrastructure
  - Enhancement request submitted (4479978)
♦ No support for multiple certificates on a single token
  - Still investigating where the problem lies
♦ Limited vendor support for PKCS#11 (Mac OS X)
  - Aladdin today
  - SafeNet (iKey) Q3 2006
  - OpenSC today

Where Can I Get It?
♦ http://ata.cit.cornell.edu/cit/ata/Project-PKI.cfm
♦ Look for "Mac OS X PKCS#11 Tokend" in the sidebar
♦ Source will be available
  - Pending Cornell's deployment of SourceForge
  - Will require you to have installed darwinbuild

WS-Mobile is designed to cope with all possible user scenarios, eliminating the need to build special infrastructures for wireless and local usage. In effect, WS-Mobile can replace most smart cards that are in some way bound to an individual.
• Multiple security mechanisms: OTP, PKI, TPM – thousands of keys and passwords!
• Supporting indirect resources: merchant flow – "User Checkout" → "Payment Request" → "User Accept" → "Pay"
• Compatibility: payment systems (3D Secure, .PAY, etc.); authentication systems (SAML, Liberty, Passport, etc.); B2B purchasing systems (SAP's OCI, OBI, Ariba's punchout, etc.)
• New uses, adding to the "thin client" concept

Are Off-line Root CAs Safer than On-line CAs?
David A. Cooper, NIST, April 5, 2006

What Is an Off-line CA?
• Disconnected from the network
• Turned off most of the time
• Issues CRLs infrequently (e.g., once a month)
• Only issues CA certificates
• The public key of the CA is used as a trust anchor

Benefits of an Off-line CA
• Risk of key compromise is reduced:
  — Completely protected from network attacks
  — Can provide greater protection from local attacks, since access to the CA is needed infrequently
  — Other benefits?

Option 1: On-line Root CA (with Subordinate CA 1 and Subordinate CA 2)
• Risk:
  — Increased risk of compromise of the root CA's key
  — If the root CA's key is compromised, all relying parties who use the CA as a trust anchor must be notified out-of-band
• Benefit:
  — Out-of-band notification is not required if a subordinate CA's key is compromised

Option 2: Off-line Root CA
• Benefit:
  — Reduced risk of root CA key compromise
• Risk:
  — Out-of-band notification is required if any subordinate (or cross-certified) CA's key is compromised

Option 3: Off-line Root CA with On-line CRL Issuer
• Benefits:
  — Reduced risk of root CA key compromise
  — Out-of-band notification is not required if a subordinate CA's key is compromised
• Risks:
  — Out-of-band notification is required if the CRL signing key is compromised
  — Path validation is more complicated

Does the use of an off-line root CA really improve security?

Why Are Web Security Decisions Hard, and What Can We Do About It?
Panel: Frank Hecker, Amir Herzberg, Sean Smith, George Staikos, Kelvin Yiu
Moderator: Jason Holt

Approximate Outline
● Part I: Defining the problem
  – Locks, logos and lingo
  – HTTP, HTTPS and redirects
  – Emailed links
  – Documentation
  – What's "security"?
● Part II: Emerging solutions
● Part III: Where should we be going?

The Problem: Locks and Logos
The Problem: Redirects
The Problem: Emailed Links
The Problem: Documentation
The Problem: What's "security"?
The Problem?
● Users have to make risk management decisions, but:
  – We don't know what's at risk
  – We don't know what's being used to protect it
  – We don't know how big the risks are
  – We don't know whom to ask
● http://www.mountain-america.net vs. http://www.mtnamerica.org

Meaning of Certificates/Identity
● Current identity-vetting procedures vary widely between CAs
● Certificate contents (i.e., subject) are unclear in meaning and scope
● Stronger assurance guarantees should be applied for high-profile targets
  – High-assurance certificates project

Proof of Identity
● Going to https://www.example.com/ only proves that you are connected to a site with a CN that matches 7777 772e 6578 616d 706c 652e 636f 6d
● Maybe this should be the new Location: field!
● If certificate verification were bi-directional, proof that the server knows the user would give a stronger indication that the identity is that which was expected
  – Breaks down if the original server was compromised

Usability... Is Hard!
● Uniform indicators across platforms help users get the right thing with all user agents
● Uniform indicators make phishing easy
● When we add UI to the browser (chrome):
  – We are stuck with it
  – We confuse users
  – We help users
  – We add to information overload

Usability and Content
● The interface should make it possible to access all information about the connection
● The interface should not force the user to make too many decisions or perform too many actions
● The content should not be able to manipulate the chrome
  – Again, this breaks the Internet
● IN THE CONTENT REGION, ALL BETS ARE OFF

Active Approaches to Security
● OCSP – the ability to shut down the bad guys
  – Similarly should be applied to DNS
● Anti-phishing databases
  – Great success for similar initiatives against SPAM
  – Should be built into the browser, not an add-on
● Live content information in the chrome
● Coordinated active software updates
● What's the problem with all of these things?
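The hex string in the "Proof of Identity" point above is just the ASCII bytes of the host name, which is easy to confirm:

```python
# The CN bytes quoted in the slide, with the spaces removed.
cn_hex = "7777772e6578616d706c652e636f6d"
print(bytes.fromhex(cn_hex).decode("ascii"))  # www.example.com
```

The panel's point stands either way: what TLS actually proves to the user is a byte-string match, not an identity in any human sense.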
Emerging Solutions: TrustBar

Thoughts on Browser UI Security
Sean W. Smith, Department of Computer Science, Dartmouth College, Hanover, NH 03755
www.cs.dartmouth.edu/~sws/
April 5, 2006
Vox Clamantis in Deserto

1. "What's your perspective?"
Ye, Smith. "Trusted Paths for Browsers." USENIX Security, 2002.
• Demonstrated how malicious server content can convincingly simulate server-side SSL signals
  - IE/Windows, Netscape/Linux... and GeoTrust
• Designed, prototyped, and validated a countermeasure in Mozilla
  - Not intruding on displayed content
  - Not requiring user preparation or work
• http://www.cs.dartmouth.edu/~pkilab/demos/spoofing/
  - Demos (obsolete)
  - Code (obsolete)

Vindication!
• We kept hearing "But that's not a real problem, is it?"
• Mozilla: touches too many modules, not a "bug fix"

2. A Trusted Path Is Not Enough
Consider...
• The old case of https://palmstore vs. "Modus Media"
• Newer cases of third-party college recommendation-letter gathering services (e.g., a page on linklings.com to upload Dartmouth letters: is the SSL trust root linklings.com, cs.dartmouth.edu, or just "it's OK"?)
A supporting PKI is conceivable, perhaps
• But how should a browser render this?
• And what happens when it gets messier (e.g., attestation)?

3. The Other Side of the Trusted Connection
• When is your browser using your private key?
• For what purpose?
• Who else on your system is using it?
Marchesini, Smith, Zhao. "Keyjacking: the Surprising Insecurity of Client-Side SSL." PKI 2003.
• Client-side SSL == user approval?
• Signed Web forms?
• "What you see is not always what you sign" – Kain, Smith, Asokan; Josang, 2002.
Related anti-phishing work on "Secure Attention Keys"
• TIPPI workshop
• Phishing and Counter-measures: Understanding the Increasing Problem of Electronic Identity Theft (M. Jakobsson and S.A. Myers, editors). John Wiley.
To appear, 2006.

Safe Browsing for Dummies :-)
Preventing Spoofing and Phishing by Secure Usability and Cryptography
Wednesday, April 5, 2006
Amir Herzberg, Computer Science Department, Bar Ilan University
http://AmirHerzberg.com

SSL Certificate Validation
● Browsers "trust" a list of CAs defined by the vendor
  – Users seldom remove unknown/untrusted CAs
● Existing certs: limited validation of identity
  – Completely determined by the CA
  – Inexpensive, "domain validated" certificates
    ● Fully automated: email challenge-response "check"
    ● Already abused in real attacks against banks
● TrustBar response: display the organization from the cert
  – "Domain validated" certs do not have an organization name
● IE7 response: "extended validation" certificates

Extended Validation and Alternatives
● Extended validation certificates:
  – Stronger authentication (actuator/notary?)
  – New, more expensive certificates
  – Again trying to impose (better) global quality
● Two (better?) alternatives:
  1. Certificate validation service – the user delegates trust (e.g. to Norton, …) rather than bundling trust with IE
  2. Public-protest-period certificates

Single-Click Logon
● Idea: avoid entry of the password by the user
  – The password cannot be stolen if the user does not enter it!
● Improved usability
  – Trivial to use: the user must click the site identifier (logo)
    ● The user cannot enter and submit a password via the site!!
  – Same button as the "site identification widget"
● Supports better authentication by sites
  – Improved efficiency and security (with weak passwords)
● Secure and convenient mobility
  – By proxy, device, or paper

Authenticating without SSL?
● Efficiency (cf. SSL), content distribution networks
  – An alternative to unprotected login pages...
● Authentication of displayed content, e.g. webmail
  – Display in a secure identification widget (TrustBar)
  – Trusted, certified "wrapper" (frame, scripts, ...)
  – An alternative means to do "secure letterhead"?
● For validation of properties: PG-13, no malware, ...
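The content-authentication idea above — trusting what is displayed rather than the pipe it arrived over — can be sketched with a detached tag over the page body. This is purely illustrative: an HMAC with a shared key stands in for the certificate-based signature a real "secure letterhead" scheme would use, and all names are invented for the sketch:

```python
import hashlib
import hmac

# Stand-in for the site's signing credential; a real scheme would use a
# private key whose certificate the browser's secure widget can display.
SITE_KEY = b"demo-only-shared-secret"

def sign_content(body):
    """Produce a detached authentication tag over the displayed content."""
    return hmac.new(SITE_KEY, body, hashlib.sha256).hexdigest()

def verify_content(body, tag):
    """The browser widget recomputes the tag before marking the content
    as authenticated, independently of how the bytes were fetched."""
    return hmac.compare_digest(sign_content(body), tag)

page = b"<h1>Your webmail inbox</h1>"
tag = sign_content(page)
print(verify_content(page, tag))                # True: content unmodified
print(verify_content(page + b"<script>", tag))  # False: content tampered
```

The design point is that the verification binds the tag to the content itself, so a CDN or an unprotected transport can deliver the bytes without being trusted.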
33 Default block mode ● Default block mode – Display only rated, signed content ● By rating agency – Invoked by clicking on special bookmark ● Ratings: – This script/executable does not contain malware – This image does not contain any logo or trademark – This page contains only content owned by Foo.com Inc. – This video is rated PG-13 ● Ensure correct ratings by reputation or penalties 34 CAUDIT PKI Federation Education sector in Australia while drawing on the experience gained while implementing this pilot project. A higher Education Sector Wide Approach 1 Introduction Dr Rodney McDuff The CAUDIT PKI Federation project is part of a larger effort from Australian Higher The University of Queensland Education Sector with support from Viviani Paz AusCERT, CAUDIT, Grangenet and the Australian government to develop an AusCERT environment in which Universities can collaborate at low cost and low risk to business-like institutions. Abstract Our aim is to develop and ultimately implement a PKI for CAUDIT universities Australian Higher Education Institutions, in (which includes universities in Australia, common with other research institutions New Zealand, Fiji and Papua New Guinea). around the world, need to collaborate with To achieve this goal we are working closely each other and with global research with other projects such as Meta Access partners. Cross-disciplinary research is also Management System Project (MAMS) and increasingly important between intra and Middleware Action Plan and Strategy inter-institutional groups and yet, (MAPS) and are taking a phased approach mechanisms for communication between to test interoperability and find out issues such groups are often insecure. Insecure regarding PKI enabled applications. communication methods are of particular concern for research because of the need to This phased approach has enabled us to protect intellectual property. 
receive support from a number of organizations and to promote extensive research in the proposed PKI architecture and how it would perform in the higher education environment.

Further funding of $649,000 has recently been awarded to the University of Queensland by the Hon Dr Brendan Nelson MP, Minister for Education, Science and Training, to develop an e-Security Framework for Research which will enable a production PKI infrastructure to be built for the sector using the architecture, policies and procedures that have been developed in this pilot project.

The purpose of this follow-on project is to implement secure access, authentication and authorisation for researchers who access services and infrastructure across global networks. This project seeks to establish an e-Security Framework to integrate two types of security systems, PKI and Shibboleth, to foster collaboration and enable the secure sharing of resources and research infrastructure within Australia and with international partners. The project will leverage existing work in both areas, build on the advantages of these different systems and create a platform to enable the secure sharing of resources for a research infrastructure.

The deployment of PKI in the higher education sector in Australia has been measured. Taking this early stage of PKI adoption into consideration, AusCERT, in conjunction with CAUDIT, has been working on a Public Key Infrastructure (PKI) Project to establish a National Certificate Authority Framework for interoperation among Australian and international universities and research groups. The first phase of this project (called the CAUDIT PKI Federation pilot) included the development of policies and guidelines, the implementation of a prototype certificate management system, and preliminary research into interoperation issues.

The intent of this framework is to minimize PKI uptake costs, minimize surprises once we move into a production environment, and provide clear guidelines for implementation to avoid retrofitting. This paper will discuss the basic implementation used and will look at some vital issues in enabling secure interoperation across the higher education sector.

2 CAUDIT PKI Federation Architecture

A given PKI can support a number of services in an organisation. The CAUDIT PKI pilot implementation provided three core services:

• Authentication – the assurance that an entity is who it claims to be.
• Integrity – the assurance that data has not been modified (intentionally or unintentionally) in transit.
• Confidentiality – the assurance of data privacy.

These services enable entities to demonstrate they are who they claim to be, to be assured that data is not undetectably modified, and to be certain that data sent to another entity is only read by the intended entity.

The CAUDIT PKI Federation has used a combination of trust models to develop its own operational model. It is comprised of a single Root Certification Authority (CA), four Subordinate CAs corresponding to each level of certification, and Institutions' CAs. The four Sub-CAs issue CA certificates to Institutions' CAs within CAUDIT. Institutions within CAUDIT either inherit the Certificate Policy and Certification Practice Statement from the Root CA and the four Sub-CAs, or comply with them. The trust model is described in detail in section 4. The following diagram illustrates the architecture chosen.

[Figure: CAUDIT PKI Federation trust model – an AusCERT Root CA with AusCERT Sub-CAs for Levels 1-4; each institution (1 to 53) operates CAs for Levels 1-4 with associated RAs; existing self-signed institutional CAs are cross-certified into the hierarchy; the whole fabric is overseen by a Policy Management Authority (PMA).]

3 Certification Levels

We believe that a fundamental issue for a successful PKI implementation is the identity of the end user (or entity) and the degree of identity checking and verification. The CAUDIT PKI Federation proposed to:

• Use several identity certification levels corresponding only to the strength of the identification process applied to the end entity, rather than to what they are or what they do within the institution. Each level also corresponds to a different signing private key for the appropriate CA.
• Base the identification process on the Australian 100 points of identity system (described in the Financial Transaction Reports Act 1988 and Financial Transaction Reports Regulations 1990), using a modified Form 201 that requires completion and proof of identification in the presence of the institution's RA.
• Use four certification levels, as detailed below.

The default operating certification level, called Level 3, is granted once an end entity has successfully accrued at least 100 points of identification. In most institutions, staff on the payroll should be able to proffer a birth certificate or passport (70 points) on induction, or a driver's licence (40 points) or a credit card (35 points), and so will easily fall within this level. Similarly, most students (and others within the institution's circle) should be able to proffer enough credentials to eventually be certified to Level 3.

It made sense to consider certification levels both greater and lesser than Level 3.

Certification Level 1 applies where end entities who are still within the institution's circle have not directly provided any credentials to the institution at all. However, these entities should have provided identification credentials to another body (not within the CAUDIT PKI circle of trust) which has an agreement of mutual trust with that institution.

Certification Level 4 is used when relying parties need an identification process stronger than Level 3. For example, consider a relying party that is a digital repository containing confidential and very sensitive intellectual property.
That relying party may insist that the end user not only have more than 100 points of identification but also a recent background check indicating that the individual has no prior history of intellectual property violations. Information regarding the agency executing the background check, and the check type, can be encoded into the end user's certificate within an X.509 extension attribute.

An example of the Level 1 trust relationship is the process of enrolling new students into a university. In Australia, state secondary education bodies transfer to the university enough information about prospective students that they can be enrolled and, if necessary, accounts created. This information has usually not been vetted by the university for veracity at this stage; the university trusts the state body that the information provided is correct.

Certification Level 2 encompasses end entities that cannot, for one reason or another, provide enough credentials to meet the 100 points criterion. These users may still need a public certificate to access low-risk resources where only possession of a valid certificate is required. It would be discriminatory to deny these users access to such resources.

The table below summarises the CAUDIT PKI Certification Levels.

Level 1
• No proactive identity check provided to the RA.
• Identity information provided by a body with which the RA has a trust relationship.
• Example: a student being enrolled in at least one subject is sufficient for certificate issuing, although identity information has only been supplied by QTAC (or a similar state body).

Level 2
• Subject must provide proof of identity by appearing in person at the RA.
• Individual cannot provide the required 100 points of identification.
• Example: short-term contractors at an institution requiring access to PKI-protected systems, whose credentials are insufficient to meet the 100 points check but who can provide some credentials (e.g. driver's licence, credit card).

Level 3
• Subject must provide proof of identity by appearing in person at the RA.
• Individual must accrue at least 100 points of identity.
• Example: foreign staff with valid passports and written references from acceptable referees.

Level 4
• Subject must provide the same information as for Level 3 certification, in addition to a character background check.
• For example, a positive check is also conducted by an appropriate external agency.

4 Trust Model

A key benefit of PKI is the ability to construct a "sense of trust" between a relying party and an end entity (whoever or whatever they may be). This sense of trust has several aspects, ranging from the technological to the psychological. At both levels, a "trusted" connection must be made between a trust anchor of the relying party and a trust anchor of the end entity.

At a technology level, trust anchors are normally either the CA that signed the end entity's own certificate, or a set of CAs that the relying party explicitly trusts or that the relying party's software vendor explicitly trusts.

CAs should not arbitrarily set up relationships, as this weakens the chain of trust. Inference of trust must also be carefully handled: if CA A trusts CA B, and CA B trusts CA C, the inference that CA A trusts CA C is not necessarily correct all the time.

CA certificate extension attributes (e.g. nameConstraints and policyConstraints) can be used to correct faulty trust inference logic; however, problems also occur if the trust chain is too long, including:

• Path processing – becomes more intensive for the relying party.
• Trust erosion – at each transition of a link of the chain, erosion of trust is possible, as the policies and procedures of each CA may not perfectly align with relying party expectations. The CA certificate extension attribute pathLengthConstraint can be used to mitigate this problem.
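As an illustration of how these constraint extensions are expressed in practice, the sketch below (a minimal example using Python's `cryptography` package; the CA names and domain are hypothetical, not CAUDIT's actual configuration) issues a subordinate CA certificate with a zero pathLengthConstraint and name constraints restricting it to a single institution's namespace:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def issue_constrained_ca(root_key, subordinate_pub, permitted_dns):
    """Sign a subordinate CA certificate carrying path-length and name constraints."""
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Institution CA Level 3")]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Federation Root CA")]))
        .public_key(subordinate_pub)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        # ca=True with path_length=0: this CA may sign end-entity certs only.
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # Constrain issuance to names under the institution's own domain.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName(d) for d in permitted_dns],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(root_key, hashes.SHA256())
    )


root = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sub = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_cert = issue_constrained_ca(root, sub.public_key(), ["example.edu.au"])
bc = ca_cert.extensions.get_extension_for_class(x509.BasicConstraints).value
```

A relying party performing path validation would then reject any chain that passes through this CA to a deeper CA, or to a subject outside the permitted namespace.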
Relying parties must attempt to construct either a direct or an indirect path between the end entity certificate and their own trust anchor. This process is trivial when the relying party and end entity share the same trust anchor. If they do not, the relying party must find a continuous chain of valid and appropriate CAs, starting from the end entity's CA and terminating at its trust anchor. If this path cannot be constructed and validated, then the relying party must be alerted to the absence of trust.

This process is called "Certificate Path Processing" and it is a major function of any PKI. If the same CA signs all end entity certificates, Certificate Path Processing is trivial and requires limited consideration. However, reality is more complicated, with thousands of active CAs having complex and opaque relationships.

For a relying party to traverse a chain link between two CAs (and therefore infer a level of trust between them), the CAs must have previously set up a trust relationship between themselves, either by one being a subordinate CA to the other or by (unilaterally or bilaterally) cross-certifying.

4.1 CAUDIT PKI Trust Model

The CAUDIT PKI Federation is a combination of models:

[Figure: the commercial CA chain and hierarchical models combined – the AusCERT Root CA above AusCERT Sub-CAs for Levels 1-4, institution CAs per level with their RAs, existing self-signed institutional CAs cross-certified into the hierarchy, and the AusCERT PMA.]

• Core CAUDIT PKI architecture – the hierarchical CA model provides good flexibility to the members of the CAUDIT PKI and a reasonably simple trust topology for Certificate Path Processing.

• Trust anchor – AusCERT operates as the trust anchor for all of the CAUDIT PKI due to existing trust relationships. AusCERT is seeking either to have its Root CA accepted into a broad range of vendors' trust lists, or to have its Root CA signed by a well-known CA already in a broad range of vendors' trust lists.

• Subordinate CA certificates – from the AusCERT Root CA certificate, there are subordinate AusCERT CA certificates for each Certification Level implemented. This allows AusCERT and the CAUDIT PKI members more control over how PKI networking is achieved over the various Certification Levels, by using various X.509 constraint extensions. Each institution will also have a separate CA certificate corresponding to each implemented Certification Level, chained back to the corresponding subordinate AusCERT CA certificate.

• Established PKIs – institutions with an established PKI will implement their part of the above design and use it to sign new end entity certificates. End entities issued by the institution's old PKI can be transferred to the new design by cross-certifying the old CA certificate to the appropriate AusCERT subordinate CA certificate. This way, the old end entities will still recognize the old CA as their trust root (and continue to function), and relying parties elsewhere can construct a chain to them.

• PMA – as each member of the CAUDIT PKI is its own self-contained organisation, AusCERT acts as a Policy Management Authority (PMA) to help maintain the trust fabric by periodically auditing the policies and procedures of each member.

• Cross certification – the AusCERT Root CA certificate will eventually be cross-certified with other PKI federations (e.g. HEBCA and various Grid PKIs) to allow collaboration between parties at national, international and global levels.

5 Additional Design Considerations

There are many other design considerations besides the identity certification levels and the trust model. We briefly discuss some of these issues below, organized around the stages of the typical management lifecycle of a certificate [ADAMS2003]; namely initialisation, issuing and cancellation.

5.1 Initialisation Phase

This phase contains:

• registering the end entities;
• generating the key pairs;
• creating certificates and distributing them to the end entities (possibly including private key distribution);
• disseminating the public certificates for use by relying parties; and
• backing up the keys.

5.1.1 Registration

Our identity registration method is based on the Australian "100 points of identification" system, with credentials offered to an RA in person.

This method works well while the CAUDIT PKI is small, where the RAs (used by end users to register) are distributed over the various institutions and key organisational units. However, it will become intractable when the CAUDIT PKI encompasses many end users.

Consider a situation of mandatory issue of personal certificates for every student. This will require bulk certificate creation, which will obviously comprise Certification Level 1 — the level designed to handle this type of situation. End users with a bulk-created certificate at Level 1 who require higher certification can present themselves to an RA and have another certificate issued. To minimize this certificate promotion, Level 1 certification must be sufficient for normal use.
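The 100-points accrual logic behind the registration workflow can be sketched as follows. The point values follow the examples given earlier in the text (birth certificate or passport: 70; driver's licence: 40; credit card: 35); the exact mapping from credentials to certification levels is our illustrative assumption, not the federation's policy:

```python
POINTS = {
    "birth_certificate": 70,
    "passport": 70,
    "drivers_licence": 40,
    "credit_card": 35,
}


def identity_points(credentials):
    """Sum the points for the credentials presented in person to the RA."""
    return sum(POINTS.get(c, 0) for c in credentials)


def certification_level(credentials, vouched_by_trusted_body=False):
    """Map presented credentials to a CAUDIT certification level (1-3).

    Level 4 additionally requires an external background check, which is
    outside the scope of this sketch.
    """
    if identity_points(credentials) >= 100:
        return 3  # the default operating level
    if credentials:
        return 2  # some credentials, but under 100 points
    return 1 if vouched_by_trusted_body else 0


print(certification_level(["passport", "drivers_licence"]))  # → 3
```

A bulk-enrolment run for new students would call `certification_level([], vouched_by_trusted_body=True)`, yielding Level 1 on the strength of the state body's vouching alone.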
Institutions are expected to employ a CMS capable of bulk key/certificate generation to prepare for large-scale PKI deployment.

There are also issues regarding bulk creation of key pairs, particularly for certificates used for signing and non-repudiation. Typically, the key pairs for such certificates are generated on the end user's computer or crypto-token. Key pair generation by a third party implies knowledge of the private key, and weakens the strength of non-repudiation.

5.1.2 Key Pair Generation

Key generation can occur at:

• the end user's computer or crypto-token;
• the RA; or
• the CA.

Depending on the use of the key, various factors affect where it should be generated.

Although losing a signing private key is inconvenient (only its corresponding verification certificate is needed after data has been signed, and the CA should hold a copy of this certificate), it may be disastrous if a decryption private key is lost, resulting in permanent loss of corporate data.

If the signing private key is known to anyone other than the end user, then the requirement of non-repudiation (i.e. "to prove to the satisfaction of a third party that the private key could not possibly have been used by anyone other than the owner of the private key") is compromised — even if the "other" is the CA itself.

At this stage we recommend generating signing key pairs on the user's computer or crypto-token; however, we also recognise this may be problematic for large-scale PKI production, and there will be security issues to consider. We expect the onus to be on the end user to ensure their signing key is appropriately backed up.

Encryption keys should be generated at either the RA or the CA to enable automatic, safe and secure archival. If an encryption key must be created on the user's computer or crypto-token, the user must make all reasonable attempts to supply this key to the institution's CA for archival purposes.

5.1.3 Certificate Creation and Key/Certificate Distribution

After generating a key pair, the public key must be securely transferred to the CA for placement in a certificate and signing by the CA, and the certificate relayed back to the user. If the key pair was generated at the RA or CA, the private key must also be securely communicated to the end user. This can be achieved using the X.509 PKI Certificate Management Protocol [RFC2510], or using Public Key Cryptography Standard (PKCS) 7 [RFC2315] or 10 [RFC2986]. The CMS employed by an institution should support at least one of these standards.

The CAUDIT PKI will issue to end users separate keys/certificates for signing/non-repudiation — which can also be used for authentication, since at its core authentication with X.509 certificates relies on signing a challenge from a party and returning it to be verified — and for encryption. To ensure that each certificate is only used for its appropriate purpose, the issuing CA should set the appropriate X.509 keyUsage attributes.

Issued certificates should be published in the institution's directory so that other users wanting to communicate with the user can easily locate them. This option would also be relevant for institutions planning or deploying web-based staff portfolio pages.

Although the ideal situation is to store private keys on a crypto-token (e.g. smart cards, which can also be used for swipe and proximity access but need a special reader, or USB keys, which have the advantage of being compatible with virtually all recent personal computers) rather than in an encrypted file on the computer's hard drive, we acknowledge these devices may still be relatively expensive for a university environment.
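The separation of signing and encryption certificates, and the PKCS#10 transfer of the public key to the CA, can be sketched as below (a minimal illustration using Python's `cryptography` package; the subject name is invented, and the keyUsage bit assignments are our reading of "appropriate purpose", not the federation's certificate profile):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def make_csr(key, common_name):
    """End-user side: a PKCS#10 request carrying the public key to the CA."""
    return (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)]))
        .sign(key, hashes.SHA256())
    )


def key_usage(purpose):
    """CA side: keyUsage bits for a signing-only or an encryption-only certificate."""
    signing = purpose == "signing"
    return x509.KeyUsage(
        digital_signature=signing,
        content_commitment=signing,   # the non-repudiation bit
        key_encipherment=not signing,
        data_encipherment=not signing,
        key_agreement=False,
        key_cert_sign=False,
        crl_sign=False,
        encipher_only=False,
        decipher_only=False,
    )


user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = make_csr(user_key, "Jane Researcher")
```

Because the CSR is self-signed with the user's private key, the CA can verify proof-of-possession before placing the public key in a certificate carrying the appropriate keyUsage extension.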
We also recognise that if the whole of CAUDIT and its encompassing staff and students are eventually to embrace the CAUDIT PKI Federation, then the Federation must embrace crypto-token technology. The crypto-card option may impact various internal policies regarding student and staff identity cards; a workaround may be to deploy crypto-cards in parallel with established identity cards.

5.1.4 Certificate Dissemination

It is essential that the university community can readily find the certificates of the people with whom they want to communicate securely. Privacy, however, is a difficult aspect of certificate dissemination, and it comes in two parts:

• Encoded information – identification certificates contain user information (e.g. name and email address) encoded in the certificate, and the certificate is useless without it. However, after a certificate is disseminated it cannot be recalled (only revoked), and it can remain in the public domain forever. There are schemes that put either an anonym or a pseudonym in the certificate (rather than the veronym) to protect privacy; however, this approach virtually cripples potential certificate use.

• Searching – privacy issues also arise from allowing everyone to browse and search the CAUDIT PKI directories and web pages for certificates. This issue is complex enough within a single institution; we suggest that CAUDIT instigate a study of solutions to this problem across all its members.

Public certificates should be published in the institution's directory. Although this aids intra-institution searches, it does not aid inter-institution searches, and ideally a single location to search for the certificates of all of CAUDIT's members is required.

One solution being investigated is for AusCERT to run a "directory of directories" service or a directory proxy. A "directory of directories" is an LDAP directory populated only with referrals to other directories. The searching application can follow the referrals to the target directory, although for some applications these hopes are in vain.
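The referral-chasing behaviour a searching application needs can be sketched abstractly as follows (no particular LDAP library is assumed; `search_fn` stands in for a real directory query, and the directory names are invented):

```python
def chase_referrals(search_fn, directory, query, max_hops=5):
    """Search a "directory of directories": if a directory returns referrals
    instead of entries, follow them (up to max_hops) until entries are found.
    """
    seen = set()
    frontier = [directory]
    for _ in range(max_hops):
        next_frontier = []
        for d in frontier:
            if d in seen:
                continue
            seen.add(d)
            entries, referrals = search_fn(d, query)
            if entries:
                return entries
            next_frontier.extend(referrals)
        if not next_frontier:
            break
        frontier = next_frontier
    return []


# A toy in-memory federation: the root directory holds no entries, only referrals.
FEDERATION = {
    "dod.auscert.example": ([], ["ldap.uni1.example", "ldap.uni2.example"]),
    "ldap.uni1.example": ([], []),
    "ldap.uni2.example": (["cn=Jane Researcher,o=Uni2"], []),
}


def toy_search(directory, query):
    return FEDERATION.get(directory, ([], []))


print(chase_referrals(toy_search, "dod.auscert.example", "cn=Jane Researcher"))
# → ['cn=Jane Researcher,o=Uni2']
```

Applications whose LDAP client cannot chase referrals automatically are exactly the ones for which the directory proxy described next is the better fit, since the proxy performs these hops server-side.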
Also, it is difficult to instigate a search for an individual across several institutions. A directory proxy service takes the request (re-writing it if necessary) and executes the search on the user's behalf at the various institutions' directories. Results are re-written (if required), collated and returned to the user. A simple web interface (e.g. similar to the EuroPKI interface) will allow greater accessibility.

Another approach being investigated is using Google as a "Web File" (also called the "Public File"), as suggested by Peter Gutmann [Gutmann04]. This approach embeds or links the user's certificate on the user's personal web page. As this page contains the user's name (and possibly a picture), a Google search will easily locate the information. To encourage this, AusCERT is looking into developing a simple CGI script, with a URL embedding an identifier for the user's certificate, that can simply be added to a personal web page.

5.1.5 Key Backup

Key backup is a key issue, and we recommend backing up encryption keys at creation by the institution's CA. This implies, however, that the institution's CMS is capable of this function. Provided this process is secure, institutions are free to implement their own procedures, which will regularly be audited by the CAUDIT PKI Federation PMA.

To protect non-repudiation, signing private keys should not be backed up by the institution at their creation; however, we recommend backing up and archiving the signing public certificate. Users should back up either of these keys using an encrypted format and a strong passphrase.

5.2 Issued Phase

After a private key and its corresponding public certificate have been disseminated, they enter the "issued" phase, which includes:

• retrieving the certificate from a remote repository (where necessary);
• validating the certificate whenever it is used;
• recovering the private key if lost; and
• updating the certificate prior to expiration.

5.2.1 Certificate Retrieval

Certificate dissemination is the act of publishing public certificates for use by others. Certificate retrieval is the complementary operation, where a relying party or end user retrieves certificates from various repositories. The infrastructure for certificate retrieval is identical to that required for certificate dissemination, and we make no further recommendation.

5.2.2 Certificate Validation

It is vitally important that any relying party can successfully perform Certificate Path Processing on certificates issued by CAs in the CAUDIT PKI Federation. Every effort must be made to create and maintain the necessary infrastructure for achieving this goal, while considering the following:

• S/MIME-enabled mail clients must be configured to embed certificate chains within the PKCS#7 MIME attachment. This way, relying parties do not need to inspect individual certificates to locate the certificates needed to traverse the CAUDIT PKI hierarchy to its top.

• All issued certificates must use the following X.509 extension attributes:
  o the Authority Information Access (AIA) extension, to supply to the relying party the location of certificate chains and cross-certificate pairs, and the location of CRLs and OCSP responders; and
  o the CRL Distribution Points extension, to supply to the relying party the location of CRLs.

• All issued CA certificates and cross-certificates must be published in either X.500 or LDAP directories so that relying parties and DPD/DPV servers can locate them. If LDAP servers are used, then a "directory of directories" or directory proxy service will be necessary.

• Institutions must publish regular and timely CRL information. If revocation lists grow large, institutions should consider using CRL partitioning and delta CRLs to minimise bandwidth.
Institutions will also be expected to run an OCSP responder.

• AusCERT will either place its Root CA certificate in the trust lists of well-known applications, or have its Root CA certificate chained to a well-known CA certificate that already exists in those trust lists.

• SSLv3/TLSv1-enabled servers must be configured to supply certificate chains to the relying party. As above, this means relying parties do not need to inspect individual certificates to locate the certificates needed to traverse the CAUDIT PKI hierarchy to its top.

• There must be a single point of CRL and OCSP information for applications that cannot discover these locations from information in the certificates. These services may be provided using indirect and redirect CRLs and an OCSP proxy.

5.2.3 Key Recovery

End users will lose private keys and forget the passphrases protecting private keys. In this situation, the RA or CA may need to retrieve the key from the key archive and securely transmit it to its owner, to prevent permanent loss of information. We recommend institutions deploy a CMS capable of key backup and recovery.

5.2.4 Key Update or Renewal

When a certificate is near expiration and the end entity still needs a certificate, the CA can either:

• Renew the certificate – the user's original public key is placed in a new certificate and issued back to the end user prior to certificate expiration. This operation can be initiated automatically by the CA prior to the end user's certificate expiration; or

• Update the certificate – a new key pair is generated and a new certificate is issued. For this operation to take place, the end user must send a certificate update request to the RA.

Each institution can select the method that provides the best balance between security and convenience for itself, its staff and its students. Either way, the end entity must be notified of the impending expiration in advance, so that they can initiate key update or renewal. For scalability, this process should be as automated and as transparent to the end entity as possible.

5.2.6 Certificate Expiration

The aim is to maximise the number of naturally expiring certificates and minimise the number of certificates that must be revoked (e.g. for users leaving the CAUDIT PKI). CAs should also aim to minimise certificate renewals and updates. For example, consider certificates issued to students and the following options:

• Issuing certificates on 1 January, valid for approximately one year – each year, new students must be issued with certificates, and continuing students must renew or update their certificates. During the year, the CA must track students permanently leaving and revoke their certificates. However, some proportion of students graduate and leave each year at about the time their certificates naturally expire, and these require no revocation. For this option, renewing or updating certificates for continuing students is an intensive task, while the revocation of certificates has less impact.

• Setting the student certificate validity period to approximately three years – to coincide with the average university degree period. In this situation, new students are issued certificates as normal, and for the large majority, their certificates expire at about the time they graduate. Certificates for the minority remaining longer than three years can be renewed or updated for each extra year at the institution. Certificates must still be revoked for students leaving before the three years.
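The per-certificate decision logic implied by 5.2.4 and 5.2.6 can be sketched as follows (the 30-day warning window is our illustrative choice, not a figure from the paper):

```python
import datetime

WARNING_WINDOW = datetime.timedelta(days=30)  # illustrative advance-notice period


def renewal_action(not_after, departed, today):
    """Decide what the CA should do for one certificate.

    not_after: the certificate's expiry date; departed: True if the holder
    has left the institution; today: the date of the check.
    """
    if departed and today < not_after:
        return "revoke"            # holder left before natural expiry
    if today >= not_after:
        return "expired"           # natural expiry, nothing to revoke
    if not_after - today <= WARNING_WINDOW:
        return "notify-renewal"    # prompt renewal or update in advance
    return "ok"


expiry = datetime.date(2006, 12, 31)
print(renewal_action(expiry, False, datetime.date(2006, 12, 15)))  # → notify-renewal
```

Running this over the whole student population makes the trade-off between the two options above concrete: the one-year option maximises the `notify-renewal` workload, while the three-year option maximises the `revoke` workload (and hence CRL size).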
This option is lighter on certificate renewal and update than the previous option; however, it is heavier on the process of revocation, and it also creates significantly larger CRLs.

Selecting an optimal validity period for staff is more difficult, due to irregular staff employment terms. While some staff members have fixed-term employment (and therefore a predictable expiry date), the majority may leave the institution before their certificates expire naturally, and therefore require revocation.

We recommend institutions carefully select the validity periods and revocation policies that best suit their needs.

5.2.5 Cancellation Phase

This phase covers the natural expiration of a certificate (and revocation if required), in addition to reissuing or renewing expired or expiring certificates. The cancellation phase also involves the records management task of maintaining a history of keying material, so that data encrypted under now-expired certificates can be decrypted in the future (if required), as well as for dispute resolution purposes.

5.2.7 Certificate Revocation

Under the CAUDIT PKI Federation, certificates can be revoked for the following common reasons:

• Compromise of the end entity's private key – if a computer or crypto-token is stolen, or the computer upon which the private key is held has been compromised, the affected certificate should be revoked as soon as possible. It is the duty of the end entity to contact the RA or CA immediately once they realize the computer or crypto-token has been stolen or otherwise compromised, and the institution must publish precise instructions to be followed in this case. If the end entity has misplaced or lost the computer or crypto-token where their private key(s) reside, they should likewise contact the CA or RA as soon as possible to revoke the certificates. Authorized administrators must also be able to initiate revocation if they suspect compromise of a private key.

• Termination of institution association – most institutions are dynamic bodies, with staff and students regularly entering and leaving. End users will inevitably terminate their employment and/or studies before natural certificate expiration, and in this situation their certificates should also be revoked. Most institutions have well-defined staff termination procedures and checklists that could be updated to include processes for revoking staff certificates; students pose more of a problem, as they generally have less well-defined procedures.

• Changing certificate information – information in a certificate will inevitably change (certificate perishability), and it may become necessary to revoke that certificate (and reissue another) before it naturally expires. Examples of such changes include name, email address or affiliation changes. To counter this situation, institutions should minimise the use of attributes with the potential to change regularly (e.g. refraining from adding attributes to an identity certificate for authorisation purposes). Attribute certificates, or access management systems like Shibboleth, are better suited for this.

5.2.8 Key History and Archive

We recommend that institutions' CAs archive all keying material for encryption certificates, and the public certificate for signing certificates, including renewed certificates and updated key pairs. Archiving allows the institution to decrypt encrypted data when private keys are lost. Signed documents can also still be verified in the future, even when the user has updated or renewed their certificates and has removed or deleted the older versions.

6 Approach Used

We have developed a phased approach to ensure that the production implementation is not only feasible, but also useful to each individual university.

• Pilot Phase – extensive research is being undertaken to understand interoperability issues with PKI-enabled applications that may arise in a production environment.
• Pre-Production Phase – investigate inclusion of the Root CA into web browsers' certificate authority lists, and compliance requirements against the appropriate FIPS. Investigate higher education requirements for authorization certificates, including short-lived authorization certificates. Investigate the alignment of Shibboleth with the CAUDIT PKI Federation trust fabric, to be performed in collaboration with the MAMS project.

• Initial Production Phase – deploy an environment that enables universities' collaborative research in a safer manner. Empower universities with the necessary information to train their users.

While these phases are very distinct, they are also interconnected, in that the results from one phase will impact and direct future phases. Using this phased approach, we hope to be able to map and document any technical and philosophical problems that may hinder a PKI implementation.

7 Conclusion

As we progress in the implementation of the CAUDIT PKI Federation Project, we face technical and business challenges. Many applications do not cope with PKI as expected. We are looking into ways to scale CRL dissemination across all members of the CAUDIT PKI. We expect that existing business processes will need to be re-evaluated, and possibly new processes put in place, before this project is taken into production.

We have finalized the Pilot Phase, in which draft Certificate Policy/Certification Practice Statements were developed and feedback sought from the participant universities and other PKIs from around the world. This phase also included the development of a PKI test environment in which CA certificates were issued to participant institutions, which in turn issued end user certificates. Preliminary interoperability tests included encryption and signing of email at the client level, browser client authentication, online validation of certificates, server-side certificates, and CRL and OCSP implementations.

At the time of writing, we have entered the Pre-Production Phase, in which we are further developing the draft CP/CPS and pursuing avenues to include the Root CA in web browsers. We are investigating higher education requirements for authorization certificates, including short-lived authorization certificates, and, in collaboration with MAMS, we are exploring the alignment of Shibboleth with the CAUDIT PKI Federation trust fabric.

We understand that one of the major hurdles in deploying a large PKI is not so much the technical intricacies of the PKI-enabled technology available to date, but support from management and end users. We all agree that PKI is not a simple implementation, and that end users may be reluctant to accept and adopt new technologies; however, we hope to develop an infrastructure that is as simple as possible and fits in with existing individual universities' infrastructures.

We are nonetheless optimistic that, with the continued support we have received from the CAUDIT universities participating in the Pilot Phase, we will be able to implement an efficient PKI solution across the higher education sector in Australia. Our phased approach has enabled us to receive support from a number of organizations, which keeps the momentum within the higher education sector in Australia moving forward.

References

[ADAMS2003] C. Adams and S. Lloyd, Understanding PKI, Addison-Wesley, 2003.
[ADAMS2004] C. Adams and M. Just, "PKI: Ten Years Later", http://middleware.internet2.edu/pki04/proceedings/pki_ten_years.pdf
[AS4539.1.2.1] AS 4539 Part 1.2.1 (2001), Information technology – Public Key Authentication Framework (PKAF) – General – X.509 Certificate and Certificate Revocation List (CRL) profile. Standards Australia.
[AS4539.1.3] AS 4539 Part 1.3 (1999), Information technology – Public Key Authentication Framework (PKAF) – General – X.509 supported algorithms profile. Standards Australia.
[DIFFIE] W. Diffie and M. Hellman, "New Directions in Cryptography", IEEE Transactions on Information Theory, Vol. 22, No. 6, November 1976.
[FBCA] Public X.509 Certification Practice Statement (CPS) for the Federal Bridge Certification Authority (FBCA), http://www.cio.gov/fpkipa/documents/fbca_cps.pdf
[FIPS140] Security Requirements for Cryptographic Modules, 1994-01, http://csrs.nist.gov/fips/fips1401.htm
[HEBCA] X.509 Certificate Policy for the Higher Education Bridge Certification Authority (HEBCA), http://www.educause.edu/ir/library/pdf/NET0309.pdf
[KOHNFELDER] L. Kohnfelder, "Towards a Practical Public-key Cryptosystem", MIT thesis, May 1978.
[MUCA] Monash University Public Key Infrastructure: Certificate Practice Statement, http://www.its.monash.edu.au/security/certs/CPS_v1_1.doc
[Gutmann04] P. Gutmann, "How to Build a PKI that Works", 3rd Annual PKI R&D Workshop, 2004.
[PKCS#12] Personal Information Exchange Syntax Standard, April 1997, http://www.rsa.com/rsalabs/pubs/PKCS/html/pkcs-12.html
[RFC 2459] Internet X.509 Public Key Infrastructure – Certificate and CRL Profile, http://www.ietf.org/rfc/rfc2459.txt
[RFC 3280] Housley, et al. (2002), Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. RFC 3280. IETF Network Working Group – PKIX. http://www.ietf.org/rfc/rfc3280.txt
[RFC 3647] Chokhani, et al. (2003), Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework. RFC 3647. IETF Network Working Group – PKIX. http://www.ietf.org/rfc/rfc3647.txt
[VERISIGNCPS] VeriSign Certification Practice Statement, http://guardent.com/repository/CPS2.3/VeriSignCPS2.3.pdf

Appendix A ("CAUDIT PKI Federation – A Higher Education Sector Wide Approach" could not be included here, due to encryption of the source PDF file.)
1 HSPD-12 Compliance: The Role of Federal PKI Judith Spencer Chair, Federal Identity Credentialing Office of Governmentwide Policy General Services Administration judith.spencer@gsa.gov

2 Genesis • July 2001 – Presidential commitment to moving E-Government forward • February 2002 – E-Authentication Initiative launched • April 2003 – CIO Council charters Federal Identity Credentialing Committee • December 2003 – E-Authentication Guidance to Federal Agencies issued • August 2004 – HSPD-12 Issued

3 PMC E-Government Agenda
Government to Citizen: 1. USA Service 2. EZ Tax Filing 3. Online Access for Loans 4. Recreation One Stop 5. Eligibility Assistance Online
Government to Business: 1. Federal Asset Sales 2. Online Rulemaking Management 3. Simplified and Unified Tax and Wage Reporting 4. Consolidated Health Informatics 5. Business Compliance 1 Stop 6. Int'l Trade Process Streamlining
Government to Govt.: 1. e-Training 2. Recruitment One Stop 3. Enterprise HR Integration 4. e-Travel 5. e-Clearance 6. e-Payroll 7. Integrated Acquisition 8. e-Records Management
Internal Effectiveness and Efficiency: 1. e-Vital (business case) 2. e-Grants 3. Disaster Assistance and Crisis Response 4. Geospatial Information One Stop 5. Wireless Networks

4 The Mandate Homeland Security Presidential Directive 12 (HSPD-12): "Policy for a Common Identification Standard for Federal Employees and Contractors" Dated: August 27, 2004

5 The Control Objectives Secure and reliable forms of personal identification that are: • Based on sound criteria to verify an individual employee's identity • Strongly resistant to fraud, tampering, counterfeiting, and terrorist exploitation • Rapidly verified electronically • Issued only by providers whose reliability has been established by an official accreditation process

6 Applicability & Use • Applicable to all government organizations and contractors (except identification associated with National Security Systems) • Used for access to Federally-controlled facilities and logical access to Federally-controlled information systems • Flexible in selecting appropriate security level – includes graduated criteria from least secure to most secure • Implemented in a manner that protects citizens' privacy

7 Sound Criteria to Verify an Individual Employee's Identity Standardize the Identity Credential Issuance Process as follows: • Organization shall use an approved identity proofing and registration process including: ― Require two identity source documents in original form from the list associated with Form I-9, Employment Eligibility Verification. At least one document shall be a valid State or Federal government-issued picture identification ― National Agency Check with Written Inquiries (NACI) or equivalent. ― FBI National Criminal History Fingerprint Check completion before credential issuance.
― In-person appearance at least once before credential issuance • Controls must ensure that no single individual can authorize issuance of a PIV credential

8 Strongly resistant to fraud, tampering, counterfeiting, and terrorist exploitation
Mandatory Electronic Data: • All data from Topology • PIN • Cardholder Unique Identifier (CHUID) • PIV Authentication Data (asymmetric key pair and corresponding PKI certificate) • Two biometric fingerprints • Minimum cryptographic mechanisms specified in SP800-78
Optional Electronic Data: • Asymmetric key pair and corresponding certificate for digital signatures • Asymmetric key pair and corresponding certificate for key management, supporting confidentiality (encryption) • Asymmetric or symmetric card authentication keys • Additional biometrics

9 FIPS-201 Requirements (Section 4.3) • The PIV Card has a single mandatory key and four types of optional keys: – The PIV authentication key shall be an asymmetric private key supporting card authentication for an interoperable environment, and it is mandatory for each PIV Card. – The card authentication key may be either a symmetric (secret) key or an asymmetric private key for physical access, and it is optional. – The digital signature key is an asymmetric private key supporting document signing, and it is optional. – The key management key is an asymmetric private key supporting key establishment and transport, and it is optional. This can also be used as an encryption key. – The card management key is a symmetric key used for personalization and post-issuance activities, and it is optional. • All PIV cryptographic keys shall be generated within a FIPS 140-2 validated cryptomodule with overall validation at Level 2 or above. In addition to an overall validation of Level 2, the PIV Card shall provide Level 3 physical security to protect the PIV private keys in storage.
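To make the role of the mandatory PIV authentication key concrete, the sketch below mimics the card's challenge-response use of an asymmetric key pair in plain software, using ECDSA over P-256, one of the algorithm choices SP 800-78 permits. This is illustration only: on a real PIV Card the private key is generated and held inside the FIPS 140-2 validated cryptomodule and never leaves it.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Card side: the PIV authentication private key. On a real card this is
# generated and kept inside the validated cryptomodule; here it lives in
# process memory purely for demonstration.
piv_auth_key = ec.generate_private_key(ec.SECP256R1())

# Reader side: send a random challenge. The card signs it with the PIV
# authentication key, proving possession of the private key.
challenge = os.urandom(32)
signature = piv_auth_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The relying system verifies the signature against the public key taken
# from the PIV authentication certificate; verify() raises on failure.
public_key = piv_auth_key.public_key()
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge-response verified")
```

This challenge-response exchange is what lets the card be "rapidly verified electronically" without transmitting any secret.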
10 Determining Assurance Levels • E-Authentication Guidance for Federal Agencies, issued by the Office of Management & Budget, Dec. 16, 2003 — http://www.whitehouse.gov/omb/memoranda/fy04/m04-04.pdf — About identity authentication, not authorization or access control — Incorporates Standards for Security Categorization of Federal Information and Information Systems (FIPS-199) • NIST SP800-63: Recommendation for Electronic Authentication — Companion to OMB e-Authentication guidance — http://csrc.nist.gov/eauth — Covers conventional token-based remote authentication

11 Assurance Levels (M-04-04: E-Authentication Guidance for Federal Agencies) OMB Guidance establishes 4 authentication assurance levels:
— Level 1 (little or no confidence in asserted identity): self-assertion, minimum records
— Level 2 (some confidence in asserted identity): on-line, instant qualification; out-of-band follow-up
— Level 3 (high confidence in asserted identity): on-line with out-of-band verification for qualification; cryptographic solution
— Level 4 (very high confidence in the asserted identity): in-person proofing; record a biometric; cryptographic hardware token solution

12 Maximum Potential Impacts (Assurance Level Impact Profiles)

Potential Impact Categories for Authentication Errors        1    2    3    4
Inconvenience, distress or damage to standing or reputation  Low  Mod  Mod  High
Financial loss or agency liability                           Low  Mod  Mod  High
Harm to agency programs or public interests                  N/A  Low  Mod  High
Unauthorized release of sensitive information                N/A  Low  Mod  High
Personal safety                                              N/A  N/A  Low  Mod/High
Civil or criminal violations                                 N/A  Low  Mod  High

13 Implementing PKI in accordance with FIPS-201 • X.509 Certificate Policy for the Federal Common Policy Framework – Provides minimum requirements for Federal agency implementation of PKI – Operates at FBCA Medium Assurance/E-Authentication Levels 3 and 4 – Cross-certified with the FBCA – Governing policy for the Shared PKI Service Provider program • Certified PKI Shared Service Provider
Program – Evaluates services against the Common Policy Framework – Conducts Operational Capabilities Demonstrations – Populates Certified Provider List with service providers who meet published criteria – Agencies not operating an Enterprise PKI must buy PKI services from certified providers 14 Approved Shared Service Providers • Verisign, Inc • Cybertrust • Operational Research Consultants • USDA/National Finance Center • Agencies operating an Enterprise PKI cross-certified with the FBCA at Medium Assurance or higher are considered compliant with FIPS-201. • In January 2008, these Enterprise PKIs will start including the Common Policy OIDs in their certificates. 15 Acquisition Policy Strategy • Two new FAR Rules • FAR Case 2005-015 – Addresses HSPD-12 requirements – Interim rule issued end of CY-05 • FAR Case 2005-017 – Directs agencies to acquire only approved products – Interim Rule in Committee awaiting final approval • OMB Guidance designates GSA as the “executive agent for Government-wide acquisitions of information technology" for the products and services required by HSPD-12 • Acquisition services will be offered via GSA Schedule Contracts 16 For More Information • Supporting Publications — FIPS-201 – Personal Identity Verification for Federal Employees and Contractors — SP 800-73 – Interfaces for Personal Identity Verification — SP 800-76 – Biometric Data Specification for Personal Identity Verification — SP 800-78 – Recommendation for Cryptographic Algorithms and Key Sizes — SP 800-79 – Issuing Organization Accreditation Guideline — SP 800-85 – PIV Middleware and PIV Card Application Conformance Test Guidelines • NIST PIV Website (http://csrc.nist.gov/piv-project/) • Federal Identity Credentialing Website (http://www.cio.gov/ficc) Path Discovery and Validation Working Group David A. Cooper NIST April 6, 2006 What is the PD-Val WG? The PD-VAL WG is a working group of the Federal PKI Policy Authority. 
Its mission is to make recommendations to the Federal PKI (FPKI) community on infrastructure and desktop solutions that will facilitate bridge-enabled certificate validation. Recommendations are based on the applicant's test results received from the FPKI Lab. Meetings are open to both agency representatives and vendors. Meetings held about once a month. Accomplishments Developed functional requirements for Path Discovery and Validation Sent out RFI to invite vendors to share information about their products' path discovery and validation capabilities Established testing program to verify products' capabilities Established Qualified Validation List Path Validation Requirements NIST Recommendation for X.509 Path Validation — Establishes path validation requirements at multiple levels (e.g., Enterprise, Bridge-enabled) • Levels based on set of extensions that can be processed. — Specifies how to use the Public Key Interoperability Test Suite (PKITS) to verify a path validation module's capabilities — Applications that satisfy all requirements for Bridge-enabled level generally preferred. Path Discovery Requirements Path Discovery test suite (still under development) Currently includes tests at two levels of complexity: — Rudimentary: Path discovery in a hierarchy — Basic: Path discovery in a mesh with one bridge Products currently being tested at both levels Plans call for development of Intermediate and Advanced Levels Path Discovery Requirements At each level there are three distinct PKIs. PKIs differ in how intermediate certificates and CRLs can be located: — Directory: locate certificates and CRLs based on DNs in issuer and subject fields and cRLDistributionPoints extension. — LDAP URI: locate certificates and CRLs based on LDAP URIs in authorityInfoAccess, subjectInfoAccess, and cRLDistributionPoints extensions. — HTTP URI: locate certificates and CRLs based on HTTP URIs in authorityInfoAccess, subjectInfoAccess, and cRLDistributionPoints extensions. 
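The three location mechanisms above can be illustrated by reading a certificate's authorityInfoAccess and cRLDistributionPoints extensions. A minimal sketch with Python's `cryptography` package; the certificate, names, and URLs below are fabricated for the demonstration:

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import (
    AuthorityInformationAccessOID, ExtensionOID, NameOID,
)

# Build a throwaway certificate carrying the two location extensions
# (a real relying party would read these from a presented certificate).
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Demo CA")])
now = datetime.datetime(2006, 4, 4)
cert = (
    x509.CertificateBuilder()
    .subject_name(name).issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=30))
    .add_extension(x509.AuthorityInformationAccess([
        x509.AccessDescription(
            AuthorityInformationAccessOID.CA_ISSUERS,
            x509.UniformResourceIdentifier(u"http://example.gov/ca.p7c"),
        ),
    ]), critical=False)
    .add_extension(x509.CRLDistributionPoints([
        x509.DistributionPoint(
            full_name=[x509.UniformResourceIdentifier(
                u"http://example.gov/ca.crl")],
            relative_name=None, reasons=None, crl_issuer=None,
        ),
    ]), critical=False)
    .sign(key, hashes.SHA256())
)

def locate_uris(cert):
    """Collect the URIs a path-discovery module could fetch."""
    uris = []
    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    uris += [d.access_location.value for d in aia
             if isinstance(d.access_location, x509.UniformResourceIdentifier)]
    cdp = cert.extensions.get_extension_for_oid(
        ExtensionOID.CRL_DISTRIBUTION_POINTS).value
    for dp in cdp:
        uris += [n.value for n in (dp.full_name or [])
                 if isinstance(n, x509.UniformResourceIdentifier)]
    return uris

print(locate_uris(cert))
```

A real path-discovery module would fetch each URI, cache the results, and recurse until a trust anchor is reached; the Directory mechanism would instead query LDAP by the DNs in the issuer and subject fields.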
Current Federal PKI only supports Directory based location. Qualified Validation List Vendors submit information about their products' path discovery and validation capabilities PD-Val WG (government members only) reviews the submission and decides whether the product should be tested Government-funded lab performs path discovery and validation testing and reports results to PD-Val WG (government members only) If results are deemed satisfactory, product is added to Qualified Validation List (QVL). — Synopsis of test results is posted for each product on list. Qualified Validation List Five vendors currently listed — Three Web server plug-ins — One Delegated Path Validation Server/E-mail client plug-in — One Delegated Path Discovery Server/client toolkit Agencies should carefully review synopses Qualified Validation List Products are included on QVL solely based on functional testing of path discovery and validation capabilities Inclusion on QVL is not based on: — Performance or stress testing — Products' capabilities other than path discovery and validation — Ease of installation or use — Vendor support services — Cost — Etcetera Future Directions for PD-Val WG? Possible future work includes: — Add OCSP to test suite — Develop a profile of SCVP for DPV/DPD clients and servers Federal PKI Policy Authority Overview and Current Status Peter Alterman, Chair Mission • Created at the direction of the Federal CIO Council and operates pursuant to Federal CIO Council authority • Representatives of cross-certified federal agencies plus observers • Sets policy governing operation of the U.S. Federal PKI • Approves applicants for cross certification with the FBCA and Shared Service Providers • Point of Interaction for E-Authentication Federation credential providers offering PKI 2 Policy Authority Org.
Chart [organization chart; recoverable labels: Federal CIO Council (E-Auth PMO, FICC); Policy Authority (www.cio.gov/fpkipa); FBCA Op Auth; working groups: PD-Val WG, Tech WG, Cert Policy WG, SSP WG; governing documents: Charter, Bylaws, Criteria & Methodology Document, Policies]

3 Simplified Diagram of Federal PKI [diagram; recoverable labels: Federal Bridge CA; Common Policy CA (Common Policy OID and root cert); cross-certified gov PKIs; Shared Service Provider PKIs; E-Gov C4 CAs (3); cross-certified external eAuth CSP PKIs]

4 Federal PKI Role in E-Authentication [diagram; recoverable labels: credential providers (banks, universities, agency apps, etc.); Levels 1 & 2 CSPs supplying SAML assertions, under the federation business rules, to Levels 1 & 2 online apps & services; Levels 2, 3 & 4 CSPs supplying digital certificates, via FBCA cross-certification, to Levels 3 & 4 online apps & services; Federal Agency PKIs, Other Gov PKIs, Commercial PKIs, Bridges]

5 Status • 13 Federal Entities Cross-certified • US Common Policy CA Cross-certified (SSPs) • 1 State PKI Cross-certified • 1 Commercial PKI Cross-certified • Engagement with E.U., Australia, Canada, UK, Asia PKI (Japan, Taiwan, Singapore) • Spawned 3 other bridge PKIs: – Higher Education (gasping prototype) – Aerospace Industry (production) – Pharmaceutical Industry (production)

6 2005 Accomplishments  Completed PKI Interoperability Project  Solved citizenship of trusted agents issue  Implemented one new LOA and 3 new policies  Cross-certified new PKIs, most recently Justice, Gov Printing Office, Wells Fargo Bank  Revised Audit Requirements  Developed Bylaws – expanded documentation and formalized processes  Developed and Adopted Methodology for B2B xcert  Implemented PD-Val test suite and certified four products/services  Prepared initial ISMS assessment of Policy Authority Processes

7 Current Implementation-Related Work  CertiPath Bridge xcert in process  USPS PKI xcert in process  DEA CSOS PKI xcert in process  Boeing PKI xcert in process  Engaged Adobe PKI - exploratory  Develop and implement cert validation service with eAuthentication  Absorbed Shared Service Provider Work Group from
FICC 8 Current Policy-Related Work  Developing audit guidelines for non-federal PKIs  Implementing Service Agreement with eAuthentication  Advisory on Rewrite of eAuthentication business and operating rules  Developing an ISO-compliant ISMS Plan for Operational Authority (ISO/IEC 27001 & 17799)  Harmonizing FIPS 201 requirements and preparing for HSPD-12 service demands  Harmonizing CP with EU QCP

9 Outreach Sponsor 2nd PKI Implementation Workshop Meetings with ETSI, UTex PKI Federation, Aussies, Internet2, EDUCAUSE, more Aiming for the Grids but so far just tentative feelers

10 Resources • www.cio.gov/fpkipa • www.cio.gov/fbca • www.cio.gov/ficc • www.cio.gov/eauthentication

11 I-CIDM Bridge to Bridge Interoperations April 6, 2006 Debb Blanchard Cybertrust Agenda Origins of the BBWG Purpose of the BBWG Bridge Certification Authority Participants Organization Participants Identification of Working Groups Top 10 Issues ©2005 Cybertrust. All rights reserved. www.cybertrust.com

2 Origins of the BBWG BCAs knew (kinda) how to bring other CAs within their own community of interest "into the fold" or cross-certify them  Policy mapping  Criteria and Methodology  User base  Business case  Operational and technical interoperability The BBWG was founded to identify issues as they pertained to, and impacted, the Federal Bridge Certification Authority (FBCA) and its attempted cross-certification with other BCAs, e.g., HEBCA, SAFE, etc. As issues were uncovered, it was noticed that the issues for the FBCA were not necessarily unique to the FBCA. The group evolved to include representatives from four Bridge Certification Authority (BCA) environments and expanded to include international representation
3 Purpose of the BBWG To address the implications of Bridge-to-Bridge cross-certification in the collaborative cross-organizational space International focus PKI-centric BBWG would not delve into corporate business models and practices that may be considered proprietary.

4 Bridge Certification Authority (BCA) Participants Federal Bridge Certification Authority (FBCA – US Government agencies, state governments, foreign governments) Higher Education Bridge Certification Authority (HEBCA – US higher education community with plans to include research institutions and higher education facilities from the EU) Secure Access for Everyone (SAFE – Pharmaceutical community led by Johnson&Johnson) Certipath (Exostar, Arinc, SITA with additional representation from Boeing, Lockheed Martin, Northrop Grumman, EADS/Airbus, tScheme, TSCP, EDS/Rolls-Royce)

5 Organization Participants Arinc/Certipath, Boeing Corporation, Cybertrust, Dartmouth College, Department of Defense, Duke University, EADS/Rolls-Royce, EDUCAUSE, Enspier Technologies, Evincible/Certipath, Exostar/Certipath, General Services Administration, IBM, Internet2, Johnson&Johnson, KPMG, Lockheed Martin, National Institutes of Health, National Institute of Standards and Technology, Northrop Grumman, Orion Security, SITA, tScheme, UKCEB TF/TSCP

6 Areas of Investigation (per the Charter) Institutionalization of standards and the suitable body/ies to own and maintain them Role of governments in governance and management of the intra-bridge environment Stimulate the development of commercial products that are "bridge aware" Need for a governance structure between cross-certified BCAs and, if so, what should it be Legal implications and shaping a legal framework that satisfies trust requirements and meets business needs, including liability
7 Areas of Investigation (per the group) Policy Mapping to determine levels of assurance (LOA) Must have a common lexicon, terminology and documents mapping for the Charter and all the documents Compliance with open standards Audit standards for BCA operations and certifications needed for the Auditors Liability and legal issues BCA Operations

8 Work Scope of the Group  BCA interoperability vs Federation interoperability  Aren't these the same under a different language?  BCA = PKI  Federation = multiple schemes, including PKI  Current Federation interoperability guidelines using BCA cross-certification as its basis  Dependencies and assumptions of other groups mentioned but not to be addressed within the confines of the BBWG, e.g., requirements for identity proofing/vetting and technical issues will not be addressed by this group.  BBWG will only address policy as it pertains to PKI and Bridge-to-Bridge policy issues; other decisions made are:  Identity Proofing and Vetting – These issues need to be addressed, but not by this group. We recommended that the I-CIDM create another working group to address these issues.  Implementation Challenges – to be addressed by the Technical Working Group.

9 Identification of Working Groups Each issue will be addressed by members of the following BCA communities:  Higher Education Bridge community  SAFE (Pharmaceutical) bridge community  FBCA and bridge government community (includes NIST and DoD)  Commercial Aerospace (Certipath, Boeing, Lockheed Martin, Northrop Grumman)

10 Top 10 Issues 1. Policy Mapping 2. Common lexicon, terminology and documents 3. Compliance with open standards 4. Audit standards for BCA operations and certifications needed for the Auditors 5. Liability and legal issues 6.
BCA Operations 7. Identity vetting => moved to an Identity Proofing & Vetting workgroup 8. Path discovery & validation => moved to Technical workgroup 9. Distinguished names and name space => moved to Technical workgroup 10. Directory services => moved to Technical workgroup

11 Policy Mapping and Methodology Issue: A mutually agreed-upon methodology for cross-certifying BCAs to allow them to interoperate  Identify the framework of documents and requirements (similar to the CP/CPS RFC) that are needed by a Bridge entity to qualify for cross certification. For example, the Bridge has to specify the cross-certification criterion and methodology document.  What is this document supposed to contain (rationale, not example)?  What other documents does the Bridge Operator have to develop in addition to the standard CP/CPS? Is there a standard set?  What about the charter and structure of the Bridge Operators – Policy Authority, Operational Authority – and the organization of these bodies?

12 Policy Mapping & Methodology - Results Documentation necessary when cross-certifying with other BCAs:  Bona fides  CP and CPS  The mapping methodology used by the Policy Authority of the BCA to determine the requirements of the Primary CAs that comprise the BCA; may include • The rules of operation • The requirements for membership • Interoperability for the BCA  Charter of Rules or Charter Disclosure Statement  Audit results

13 Charter Disclosure Statement Determines the rules and business procedures under which a BCA operates.
Should identify:  Purpose of the BCA  Organizational structure of the BCA including separation of operational and policy responsibilities  Liability framework  Policy authority and governance structure  Contract infrastructure, e.g., relying party obligations and subscriber agreements, insurance policy, etc.  General operational environment, i.e., the communities of interest in which the applicant BCA participates either directly or indirectly.

14 Governance and BCA Charter Governance of the BCA should address how it does business and how it is governed Need to identify and create a standard way of auditing a non-standard document, such as the specialized BCA charter New standards may be needed Issues to be addressed (not limited to):  If a PCA leaves a BCA, what is the notification process of other BCAs and PCAs – especially for certificate path processing  Dispute resolution included in the MOA with specifics to address how a BCA does business to notify others  The perceived need for entities to have visibility into the CPSs and audit results of specific PKIs beyond their BCA domain.

15 Common terminology, definitions and lexicon Issue: Need for common criteria and a lexicon (common language of business) for grammar, syntax, etc.  Includes the definition and contents of documents as well.  Includes liability  Mapped international terms, grammar, syntax, etc. as well Terms were synthesized from multiple sources, e.g., EAP, FBCA CP, Boeing Security, ISO, American Bar Association, RFC, so that only one term was accepted by the group Complete as of 12/17/2004 for this living document Liability terms were not addressed in this document Contents of other documents are discussed separately
16 Open Standards & Compliance Issue: Standards for a BCA must rely upon open standards and not proprietary standards  Must include international standards  Since PKI-centric in nature, the standards should be PKI standards. However, other standards may be included (or created). Verify that the bridges are working with open standards. The framework should show how these standards fit together via a mapping between US standards and international standards, as well as a gap analysis of these standards. This activity is linked to the technical working group. A first draft has been provided to a sub-group of the BBWG, which includes US standards; however, international standards need to be incorporated.

17 Audit Standards and Certifications Issue: How do we know that a BCA is operating at a level that can be trusted?  What certifications are placed upon the auditors to ensure their qualifications and competence to perform the task? Independence of the auditors from the organization and CP/CPS?  What are the audit standards for Bridge-to-Bridge?  What is examined and to what degree of rigor?  What documents are needed to support the auditors and what does the auditor give to the BCA operations, e.g., certificate of approval? Documents to support the audit:  CP and CPS  Operating Procedures  Security Procedures  Charter Disclosure Statement  Business purpose of the BCA  Contracts, MOUs, and MOAs with its community members  Mapping methodology  Documents similar to FIPS 200 and SP800-53 (minimum security requirements and controls)

18 Audit Standards and Certifications The third-party evaluation of the BCA operations This is equivalent to the evaluation of a member PKI's operations during intra-domain BCA cross-certification.
A key issue to address during this step is what attestation standard was used by the third party.  American Institute of Certified Public Accountants (AICPA) / Canadian Institute for Chartered Accountants (CICA) WebTrust Program for Certification Authorities (WTCA) versus the tScheme or British Standard 17799 (or follow-on ISO 27001 and 27002) methodologies.  The reviewing BCA PA will have to decide whether the third-party review is comparable with its own third-party attestation

19 Liability and Legal Issues Issue: What are the liability and legal implications for:  Operating a BCA?  The contractual mechanism between BCAs?  Indemnification?  Limits on liability?  Others?

20 BCA Operations Issue: Some BCA CPs have internal requirements in order to cross-certify with other CAs or BCAs, e.g., originally, the FBCA required other CAs – and by extension BCAs – seeking cross-certification to be operated by US citizens. Lots of discussion (sometimes very lively!) to address requirements for BCA operators, including definitions of:  Trustworthiness  Loyalty  Integrity

21 BCA Operations – Citizenship & Trusted Roles FBCA created new policies to include  Medium Assurance HW  Medium Assurance CBP (commercial best practice)  Medium Assurance HW CBP (commercial best practice) Re-defined requirements for trustworthiness, loyalty and integrity, and all four medium policies will have these identical requirements.  Section 5.3.1, Background, qualifications, experience, and security clearance requirements, "…All persons filling trusted roles shall be selected on the basis of loyalty, trustworthiness, and integrity...
"  Section 5.3.1, Background, qualifications, experience, and security clearance requirements, "…Entity CA personnel shall, at a minimum, pass a background investigation covering the following areas: • Employment; • Education; • Place of residence; • Law Enforcement; and • References."  Section 5.3.1, Background, qualifications, experience, and security clearance requirements, "The period of investigation must cover at least the last five years for each area, excepting the residence check, which must cover at least the last three years. Regardless of the date of the award, the highest educational degree shall be verified." Practice Note for nongovernmental partners: The qualifications of the adjudication authority and procedures utilized to satisfy these requirements must be demonstrated before cross certification with the FBCA.

22 BCA Operations – Citizenship & Trusted Roles FBCA current medium and new medium hardware policies include language that addresses the citizenship requirements for CAs run in foreign countries and CAs run by multinational entities. Note: this language will NOT be in medium-cbp or medium hardware-cbp, which are citizenship-blind policies. FBCA citizenship requirements for trusted roles are no longer required for Basic and Rudimentary trust levels No requirement for High Assurance-CBP policy  EAuthentication initiative has defined medium hardware (and the proposed medium hardware-cbp) as satisfying the requirements for EAuthentication Level 4 (highest level) for all eGov applications.  In practice no external entity will ever be required to have a high assurance certificate to do business with an eGov application.  This decision may be revisited, and any PKI, or bridge, may run at high assurance without cross-certifying with the Federal Bridge at high assurance.
For example, if FBCA cross-certifies with SAFE at medium hardware-cbp, any PKI cross-certified with SAFE at that LOA or better would see its credentials accepted by any eGov application, all the way up to Level 4, the highest. FBCA reserves high assurance cross-certification for government PKIs only.

23 FPKI to E-Authentication [diagram mapping Federal PKI certificate policies to E-Authentication assurance levels; recoverable labels: Federal Bridge CA and Federal Common Policy CA: High; MediumHW and MediumHW-CBP – Level 4; Medium and Medium-CBP – Level 3; Basic (Citizen and Commerce Class Policy CA) – Level 2; Rudimentary – Level 1; E-Authentication Governance CAs] (slide compliments of Judy Spencer, FICC chairperson)

24 The World According to FBCA [diagram; recoverable labels: Federal Agencies (Defense, NASA, USDA/NFC, Treasury, USPTO, State, Energy, Homeland Security); External Organizations; Industry Bridges (SAFE, HEBCA, Certipath); Allied Governments; State Government; Federal Common Policy (SSP); ACES; Levels 3 & 4 interface to E-Authentication Architecture via PKI-SAML Conversion] (slide compliments of Judy Spencer, FICC chairperson)

25 Current Status FPKI Policy Authority adopted a methodology for cross-certifying with another PKI Bridge – "Federal PKI Criteria and Methodology, Part Three"  Calls for mutual agreement on terms of engagement;  Recommends the following: • Mutual evaluation of bona fides (Charter, legal standing) • Mutual evaluation of business operational processes • Mutual CP mapping • Mutual technical interoperability testing • Signing of Memorandum of Understanding  Constrains paths to include no more than two bridges (limits transitivity) for the present;  And lists a series of questions that need to be answered satisfactorily. FBCA and CertiPath Bridge CA nearing successful completion of cross-certification (April 2006)
Summary
• BCA cross-certification is still an evolving process.
• As we become more adept, the process will become more defined.
• The paper trail is one part of the process. In-person meetings will still be important to understand the intent and business of a BCA.
• Laws and regulations may restrict some goals for cross-certification.
• Legal and liability issues will probably never be completely resolved, due to the nature of the legal community.
Did the BBWG meet its goals?
• Still work to do
• CertiPath is almost complete
• SAFE is beginning its process

For more information
Dr. Peter Alterman, Chair, FPKI Policy Authority (FPKI PA) – altermap@nih.gov – 301-496-7998
Ms. Judith Spencer, Chair, Federal Identity Credentialing Committee (FICC) – Judith.spencer@gsa.gov – 202-208-6576
Ms. Deborah "Debb" Blanchard – Deborah.blanchard@cybertrust.com – 443-367-7011

Bridge-to-Bridge Interoperability: Technical Considerations
Santosh Chokhani (chokhani@orionsec.com)

Outline of Presentation
• Performing Cross Certification Securely
• Bilateral
• Bridge
• Bridge-to-Bridge
• Path Discovery and Path Validation Challenges
• OCSP Considerations
• SCVP Considerations
• Practical Considerations
• Impact on Certificate Policies
• Summary

Cross Certification: Bilateral
• Scope of this presentation is limited to technical topics – e.g., policy equivalency mapping is not addressed
• Use the nameConstraints extension to ensure that the relying parties in your domain only trust certificates issued to the names appropriate for the cross-certified domain
• Set inhibitPolicyMapping, skipCerts = 0 so that you do not trust other domains cross-certified by the "cross-certified domain"
– If you want to trust those other domains, you will cross-certify with them. In other words, trust is bilateral, like other business relationships.
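The skipCerts value determines how far down a certification path policy mappings remain trusted. A minimal sketch of the inhibit-policy-mapping bookkeeping from RFC 3280 path processing — simplified in that every certificate is treated as non-self-issued and only the mapping-inhibition state variable is modeled:

```python
def mappings_honoured(path):
    """Given a chain (trust-anchor side first), return, for each certificate,
    whether a policyMappings extension in it would be honoured.

    Each cert is a dict with optional keys:
      'has_mapping' (bool) - cert carries a policyMappings extension
      'skip_certs'  (int)  - cert's inhibitPolicyMapping skipCerts value
    """
    inhibit = len(path) + 1  # initial value: effectively uninhibited
    honoured = []
    for cert in path:
        # RFC 3280 6.1.3: mappings are processed only while the counter > 0
        honoured.append(bool(cert.get("has_mapping")) and inhibit > 0)
        # RFC 3280 6.1.4 (h): decrement the counter (floor at zero) ...
        inhibit = max(inhibit - 1, 0)
        # ... (i): a smaller skipCerts in this cert's policyConstraints wins
        sc = cert.get("skip_certs")
        if sc is not None and sc < inhibit:
            inhibit = sc
    return honoured

# Bilateral: skipCerts = 0 in the cross-certificate means any mapping
# attempted further down the path is ignored.
bilateral = [{"skip_certs": 0}, {"has_mapping": True}]
# Bridge: skipCerts = 1 lets the next CA (the Bridge) map, but no one after.
bridged = [{"skip_certs": 1}, {"has_mapping": True}, {"has_mapping": True}]
# Bridge-to-Bridge: skipCerts = 2 lets two successive Bridges map.
two_bridges = [{"skip_certs": 2}, {"has_mapping": True},
               {"has_mapping": True}, {"has_mapping": True}]

print(mappings_honoured(bilateral))    # [False, False]
print(mappings_honoured(bridged))      # [False, True, False]
print(mappings_honoured(two_bridges))  # [False, True, True, False]
```

The three example paths reproduce the skipCerts = 0, 1, and 2 settings the talk recommends for the bilateral, bridge, and bridge-to-bridge cases respectively.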
• Applies to Enterprise, Bridge, and Bridge-to-Bridge environments also
– Need a strategy for policy assertion. Examples:
• The PKI asserts all lower policies also
• The cross-certificate maps a low policy to all higher policies also
• Applications include all higher policies in the acceptable policy set

Cross Certification: Bridge
• Bridge uses the permittedSubtrees field in the nameConstraints extension to allocate name spaces to PCA domains appropriately
• PCA sets inhibitPolicyMapping, skipCerts = 1 so that the Bridge can map to other domains, but other domains cannot
– What if a Bridge-to-Bridge link is taken?
– What if the old idea of a Bridge membrane becomes reality?
• Bridge sets inhibitPolicyMapping, skipCerts = 0 in PCA certificates

PKI Trust Model: Bridge
[Diagram: hub-and-spoke trust model — several PCAs, each cross-certified with a central Bridge CA]

Cross Certification: Bridge-to-Bridge
• Bridges may not be able to use the nameConstraints extension to allocate name spaces to other Bridges
– Too many disjoint name spaces
• Bridges can ensure bilateral Bridge-to-Bridge interoperability by:
– Using excludedSubtrees to assert the names of all other Bridges in a Bridge certificate
– Asserting inhibitPolicyMapping, skipCerts = 1 in Bridge certificates
• PCA sets inhibitPolicyMapping, skipCerts = 2 so that a Bridge can map to other Bridges
– May not be as useful, since Bridges can be trusted to do this correctly
• Bridge sets inhibitPolicyMapping, skipCerts = 0 in PCA certificates
• Bridge sets inhibitPolicyMapping, skipCerts = 1 in Bridge certificates

PKI Trust Model: Bridge-to-Bridge
[Diagram: several bridge CAs — SBCA, FBCA, EBCA, CBCA — cross-certified with one another, each serving its own set of PCAs]

Inhibit Policy Mapping Examples
[Diagram: skipCerts = 0 for a direct PCA m → PCA n cross-certificate; skipCerts = 1 for a PCA n → Bridge → PCA m → CA path; skipCerts = 2 for a PCA n → Bridge → Bridge → PCA m → CA 1 → CA 2 path. Rely on the Bridges to set skipCerts = 0 on outgoing arcs to the PCAs.]

Certification Path Discovery Challenges
• See Informational RFC 4158
• Using DNS redirect, publish the following in your domain – "Bridge CA certificates issued by you 
only" in the Bridge p7c file and/or in the Bridge CA directory entry
– The Bridge CA certificate, depending on which Bridge you are cross-certified with (in the p7c and/or in the Bridge CA directory entry)
• If your domain is cross-certified by a Bridge, only publish the certificate issued by you and no other Bridges or PCAs
• Else, only publish the certificate issued by the Bridge you are cross-certified with
• In other words
– For I = 1 to n, BridgeI p7c/cACertificate = Your PCA → BridgeI, or BridgeI p7c/cACertificate = BridgeX such that BridgeX → Your PCA is not null
– These measures will help select the path to your PCA only, and that is what you want

Certification Path Validation Challenges
• No more than in other environments
• The same rules apply
• More on commercial product limitations under "Practical Considerations"

OCSP Considerations
• The local policy model (e.g., trust anchor) approach does not scale well for the Bridge environment
– Need to use the Delegated or CA model
– Or use CRLs and not OCSP
– SAFE requires OCSP

SCVP Considerations
• No more than in other environments
• The SCVP server must be able to build and verify paths for various trust models

Practical Considerations
• Limitations of commercial products in terms of certification path development
– Some require the use of the AIA caIssuers field
– Some browsers unduly build paths to roots sent by a server
• Implies you cannot build paths and hence authenticate yourself across a Bridge
• Limitations of commercial products in terms of certification path validation
– Some of the most commonly used products do not pass many of the PKITS tests, especially in the areas of name constraints and policy processing
– Need to push the vendors to comply with RFC 3280 and pass the PKITS or PD-VAL tests
– CAPI behavior if two or more trust anchors from the Bridge environment are in the trust store
• MSFT is aware and very responsive

Practical Considerations
• A Shared Service Provider's list of enumerable name spaces 
for assertion in the nameConstraints extension may be too long
– Alternative One: Use name subordination via the Shared Service Provider CA name
– Alternative Two: Do all of the following
• PCA issues CA certificates with pathLengthConstraint = 0
• CA names are tracked or assigned using some method, for the benefit of all Bridges, to procedurally ensure that CA names do not collide
• Use CA software controls to define the name spaces for which the CA issues certificates
• CA ensures that names assigned to an organization are appropriate for the organization

Impact on Certificate Policy
• Bridge CP should address PCA Domain (also known as Entity) PKI requirements
– This is addressed unevenly by the current Bridge CPs
• Address the shared service provider CA name space and path length requirements

Summary
• Rely on the Bridge to assert inhibitPolicyMapping, skipCerts = 0 for PCA certificates
• Rely on nameConstraints whenever possible
• Assert the names of other Bridges in the excludedSubtrees field of Bridge-to-Bridge certificates
• Press PK-enablement toolkits and product vendors to comply with RFC 3280 and PD-VAL
• Beef up Bridge CP requirements to address Entity PKI requirements
• Name uniqueness is important
– Have a strategy for PCA name space coordination
– Have a strategy for shared service provider CA name space coordination if name constraints are not imposed on shared service provider CAs
• Have a strategy for policy assertions
• Have a strategy for OCSP interoperability
• DNS redirect for AIA or LDAP entries helps immensely with the computational complexity of certification path discovery

PKI Federations in Higher Education
NIST PKI R&D Workshop #5, April 4-6 2006, Gaithersburg MD

Contents
• Overview of PKI in Higher Education
• HEBCA
• Challenges and Opportunities

Overview
• 5 Potential Killer Apps for PKI in Higher Education
– S/MIME
– Paperless Office workflow
– Shibboleth
– GRID Computing Enabled for Federations
– E-grants 
facilitation

Overview
• PKI Initiatives in the US Higher Education Community
– HEBCA (Higher Education Bridge Certificate Authority)
– USHER (US Higher Education Root)
– InCommon
– Grid based PKIs
– Campus based PKIs

Overview: Higher Education Bridge Certificate Authority – HEBCA
• HEBCA facilitates a trust fabric across all of US Higher Education so that credentials issued by participating institutions can be used (and trusted) globally — e.g., signed and/or encrypted email, digitally signed documents (paperless office), etc. can all be trusted inter-institutionally and not just intra-institutionally
• Extension of the Higher Education trust infrastructure into external federations is also possible, and proof-of-concept work with the FBCA (via BCA cross-certification) has demonstrated this inter-federation trust extension
• Single credential accepted globally
• Uses Levels of Assurance to indicate the strength of identification and authentication procedures, audit/separation-of-duty requirements, and key protection measures
• Potential for stronger authentication, and possibly authorization, of participants in grid based applications

Overview: United States Higher Education Root – USHER
• USHER is a public key infrastructure (PKI) supported by the higher education community to facilitate emerging deployments in research, education, and transactions in higher education that require PKI; it allows subscribers to base PKI applications and services in a common root with peers and collaborative partners
• USHER is the Trusted Root of a hierarchical PKI for US Higher Education – the root only signs subordinate CA certificates, and the service is designed to bootstrap institutional PKIs by providing policy infrastructure and a CA
• USHER Foundation is the first service offered and is designed to be a broadly adoptable PKI with easy implementation by leveraging most existing campus identity practices
• USHER Foundation does not audit or in any other way validate the policy or 
practice that a subscriber uses to issue certificate credentials to its users; instead, USHER has developed a set of Expected Practices for campus CA operators to consider
• Other USHER services are anticipated, with stronger levels of assurance and auditable policies

Overview: InCommon
• The mission of the InCommon Federation is to create and support a common framework for trustworthy shared management of access to on-line resources in support of education and research in the United States
• InCommon will facilitate development of a community-based common trust fabric sufficient to enable participants to make appropriate decisions about access control information provided to them by other participants
• InCommon is intended to enable production-level end-user access to a wide variety of protected resources, and uses Shibboleth® as its federating software
• InCommon® eliminates the need for researchers, students, and educators to maintain multiple, password-protected accounts
• Although this system is assertion based, there is still a need for PKI credentials to protect the server infrastructure, and PKI can also be used as the authentication mechanism

Overview: Grid based PKIs
• Some higher education institutions operate production-level Grid CAs approved by TAGPMA
– TeraGrid (Illinois, Purdue)
– Open Science Grid (California)
– Texas High Energy Grid (Texas)
– San Diego Supercomputing Center
• Many institutions run experimental grid CAs to investigate the potential of this activity
– Dartmouth College
– University of Virginia
– …

Overview: Campus PKIs
• Managed PKIs from commercial vendors
– CA operations outsourced to the vendor
• CyberTrust
• DST/Identrus
• GeoTrust
• VeriSign
– Vendor based Policy
– Local RAs
• Internal Campus PKI operations
– CA & RA operations run on campus
– Campus based Policy
• EDUCAUSE has programs for reducing cost through the Identity Management Services Program
– http://www.educause.edu/IMSP
• Open Source options e.g. 
OpenCA, CA-in-a-box, etc.

HEBCA: Higher Education Bridge Certificate Authority
• Bridge Certificate Authority for US Higher Education
• Modeled on the FBCA
• Provides cross-certification between the subscribing institution and the HEBCA root CA
• Flexible policy implementations through the mapping process
• The HEBCA root CA and infrastructure are hosted at Dartmouth College
• Facilitates inter-institutional trust between participating schools
• Facilitates inter-federation trust between the US Higher Education community and external entities

HEBCA Project
• What will it provide?
– The HEBCA Project will create and maintain three new Certificate Authority (CA) systems for EDUCAUSE and will also house the existing HEBCA Prototype CA
– The three CA systems to be created are:
• HEBCA Test CA
• HEBCA Development CA
• HEBCA Production CA
– The HEBCAs will be used to cross-certify Higher Education PKI trust anchors to create a bridged trust network
– The HEBCA Test CA will also be cross-certified with the Prototype FBCA (other emerging Bridge CAs are also targets), and the HEBCA production CAs will be cross-certified with the production FBCA. 
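Cross-certifying a trust anchor means each party issues the other a CA certificate carrying the interoperability controls discussed in the bridge talk earlier in these proceedings: nameConstraints to bound the partner's name space and inhibitPolicyMapping with skipCerts = 0. A hedged sketch using Python's `cryptography` package — all names, keys, and validity dates are illustrative, and a real bridge CA would also assert mapped policies, CRL distribution points, and AIA pointers:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Illustrative keys; a real bridge keeps its signing key in an HSM.
bridge_key = ec.generate_private_key(ec.SECP256R1())
member_key = ec.generate_private_key(ec.SECP256R1())

bridge_name = x509.Name(
    [x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Bridge CA")])
member_name = x509.Name(
    [x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example University CA")])
member_space = x509.Name(
    [x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example University")])

cross_cert = (
    x509.CertificateBuilder()
    .issuer_name(bridge_name)
    .subject_name(member_name)
    .public_key(member_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime(2006, 4, 4, tzinfo=datetime.timezone.utc))
    .not_valid_after(datetime.datetime(2009, 4, 4, tzinfo=datetime.timezone.utc))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    # Constrain the member CA to its own directory name space
    .add_extension(
        x509.NameConstraints(
            permitted_subtrees=[x509.DirectoryName(member_space)],
            excluded_subtrees=None),
        critical=True)
    # skipCerts = 0: the member CA may not map policies onward
    .add_extension(
        x509.PolicyConstraints(require_explicit_policy=None,
                               inhibit_policy_mapping=0),
        critical=True)
    .sign(bridge_key, hashes.SHA256())
)

pc = cross_cert.extensions.get_extension_for_class(x509.PolicyConstraints).value
print(pc.inhibit_policy_mapping)  # 0
```

A full cross-certification produces a pair of such certificates, one in each direction, published as a crossCertificatePair in each party's directory.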
HEBCA Project - Overview
[Diagram: the FBCA and HEBCA infrastructures side by side, each with PA and CP oversight; each bridge operates a root CA and a directory holding its root certificate, cross-certificate pairs, and CRLs. Cross-certified PKIs — the DST ACES PKI, FBCA member PKIs, University 1 and University 2 PKIs, and other cross-certified PKIs — each publish their root certificate, cross-certificate pairs, and CRLs in border directories. X.500-based directories interconnect via chaining (the X.500 DSP protocol) through a Registry of Directories (RoD), under chaining agreements between the FBCA and cross-certified PKI providers; LDAP-based directories are reached via LDAP referrals.]

HEBCA Policy Authority
The HEBCA PA establishes policy for and oversees operation of the HEBCA. HEBCA PA activities include…
• approve and certify the Certificate Policy (CP) and Certification Practices Statement (CPS) for the HEBCA
• set policy for accepting applications for cross-certification and interoperation with the HEBCA
• certify the mapping of policy between the HEBCA CP and applicants' CPs
• establish any needed constraints in cross-certification documents
• represent the HEBCA in establishing its own cross-certification with other PKI bridges
• set policy governing operation of the HEBCA
• oversee the HEBCA Operational Authority
• keep the HEBCA Membership and the HEPKI Council informed of its decisions and activities.
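Certifying a policy mapping — one of the PA duties above — results in policyMappings assertions in the cross-certificates, each pairing an issuer-domain policy OID with the subject-domain OID judged equivalent. A toy sketch of the effect on a relying party's acceptable-policy set; the OIDs below are placeholders, not real HEBCA or FBCA policy identifiers:

```python
# Placeholder OIDs — purely illustrative, not real policy identifiers.
HEBCA_MEDIUM = "2.999.1.2"    # issuer-domain policy (relying party's terms)
CAMPUS_MEDIUM = "2.999.7.5"   # subject-domain policy (member campus's terms)

def satisfies(asserted_policies, acceptable_policies, policy_mappings):
    """True if any policy asserted in an end-entity certificate matches the
    relying party's acceptable set, either directly or via a cross-
    certificate's policyMappings (issuerDomainPolicy, subjectDomainPolicy)
    pairs. Simplified: a single mapping step, no mapping inhibition."""
    # Reverse map: which issuer-domain policies does each subject policy satisfy?
    reverse = {}
    for issuer_pol, subject_pol in policy_mappings:
        reverse.setdefault(subject_pol, set()).add(issuer_pol)
    for pol in asserted_policies:
        equivalents = reverse.get(pol, set()) | {pol}
        if equivalents & set(acceptable_policies):
            return True
    return False

mappings = [(HEBCA_MEDIUM, CAMPUS_MEDIUM)]
# A campus cert asserting its local medium policy satisfies a relying party
# that only knows the HEBCA policy, because the bridge mapped the two.
print(satisfies({CAMPUS_MEDIUM}, {HEBCA_MEDIUM}, mappings))  # True
# An unmapped policy does not.
print(satisfies({"2.999.9.9"}, {HEBCA_MEDIUM}, mappings))    # False
```

Real path validation performs this translation incrementally via the valid-policy tree of RFC 3280 section 6.1, but the one-step version shows why mapping certification is a trust decision: it declares two independently written CPs equivalent.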
HEBCA Operating Authority
• The HEBCA OA is the organization responsible for the issuance of HEBCA certificates when so directed by the HEBCA PA, the posting of those certificates and any Certificate Revocation Lists (CRLs) or Certificate Authority Revocation Lists (CARLs) into the HEBCA repository, and maintaining the continued availability of the repository to all parties relying on HEBCA certificates.
• Specific responsibilities of the HEBCA OA include:
– Management and operation of the HEBCA infrastructure;
– Management of the registration process;
– Completion of the applicant identification and authentication process; and
– Complying with all requirements and representations of the Certificate Policy.
• Key personnel from the Dartmouth PKI Laboratory were chosen as the HEBCA Operating Authority by the HEBCA PA under the direction of EDUCAUSE (the project sponsor).

HEBCA Project - Progress
• What's been done so far?
– Operational Authority (OA) contractor engaged (Dartmouth PKI Lab)
– MOA with commercial vendor for infrastructure hardware (Sun)
– MOA with commercial vendor for CA software and licenses (RSA)
– Policy Authority formed
– Prototype HEBCA operational and cross-certified with the Prototype FBCA (new Prototype instantiated by HEBCA OA)
– Prototype Registry of Directories (RoD) deployed at Dartmouth
– Draft of Production HEBCA CP produced
– Draft of Production HEBCA CPS produced
– Preliminary Policy Mapping completed with FBCA
– Test HEBCA CA deployed and cross-certified with the Prototype FBCA
– Test HEBCA RoD deployed
– Production HEBCA development phase complete
– Infrastructure has passed interoperability testing with FBCA
– Some minor documentation to finalize
– Ready for audit and production operations

Solving Silos of Trust
[Diagram: institutional CAs and their departmental sub-CAs linked through bridges — an institution's CAs reachable via the FBCA, HEBCA, CAUDIT PKI, and USHER.]

Proposed Inter-federations
[Diagram: HEBCA at the center of proposed inter-federations — cross-certified with the FBCA PKI (which links the DST, NIH, and ACES PKIs), USHER, CertiPath, SAFE, the CFPKIB, and higher education communities abroad (CAUDIT/AusCERT, HE JP, HE BR), plus campus PKIs such as Dartmouth, Texas, Wisconsin, UVA, and University N.]

Challenges and Opportunities
• Operational constraints: an offline CA with six-hourly CRLs, requiring a dually authenticated sneaker-net with limited staffing
– Pre-generate CRLs
– AirGap: USB based switch
• Audit
– What standard?
– Cost barriers
• Support for Bridge PKIs in current applications
– Cross-certificate, path discovery, and path validation support is limited in COTS products

[Slide: AirGap MkII]

Challenges and Opportunities
• Community applicability
– If we build it they will come
– Chicken & Egg profile for infrastructure and applications
– An appropriate business plan
• Consolidation and synergy
– Are USHER & HEBCA competing initiatives?
– Benefits of a common infrastructure
• Alignment with policies of complementary communities
– Shibboleth / InCommon
– Grids (TAGPMA)

[Slide: Bridge-Aware Applications]

Challenges and Opportunities
• Open Tasks
– Re-evaluate operating LOA
– Audit
– Updated Business Plan
– Mapping Grid Profiles
• Classic PKI
• SLCS
– Promotion of PKI Test bed
– Validation Authority service
– Cross-certification with FBCA
– Cross-certification with other HE PKI communities
• CAUDIT PKI (AusCERT)
• HE JP
• HE BR

For More Information
• HEBCA Website: http://www.educause.edu/HEBCA/623
• EDUCAUSE IMSP: http://www.educause.edu/IMSP
Scott Rea - Scott.Rea@dartmouth.edu