
Per-Entity Metadata Working Group - 2016-08-31
Agenda and Notes

[Etherpad used to create these notes: Agenda_and_Notes_-_2016-08-31.etherpad]


Dial in from a Phone:
 Dial one of the following numbers:
  +1.408.740.7256
  +1.888.240.2560
  +1.408.317.9253
 Meeting ID:  195646158 #
 Meeting URL (for VOIP and video):  https://bluejeans.com/195646158
 Wiki space:  https://spaces.at.internet2.edu/x/T4PmBQ

Attendees

  • David Walker, InCommon/Internet2
  • Scott Koranda, LIGO
  • Nick Roy, InCommon/Internet2
  • Ian Young
  • Regrets: Chris Phillips, CANARIE
  • Regrets: Rhys Smith, Jisc
  • John Kazmerzak, University of Iowa
  • Paul Engle (Rice U)
  • Tom Scavo, InCommon/Internet2
  • IJ Kim, Internet2
  • Tom Mitchell, GENI
  • Scott Cantor, tOSU
  • Paul Caskey, Internet2
  • Steve Carmody, Brown
  • Ann West, Internet2/InCommon
  • Tommy Doan, Southern Methodist University


Agenda and Notes

  1. (Discussion of collaboration for the final report before official start of call)
    1. David and Scott will talk about moving final report to Google Docs.
  2. NOTE WELL: All Internet2 Activities are governed by the Internet2 Intellectual Property Framework. - http://www.internet2.edu/policies/intellectual-property-framework/
  3. NOTE WELL: The call is being recorded.
  4. Agenda bash
  5. Distributing an IdP-only aggregate
    1. Ops perspective: https://spaces.at.internet2.edu/x/UgAZBg (Tom Scavo)
      1. Nick: No service is permanent; there will always be change.  Being conservative about change is warranted, but we need to balance it with pragmatism.
    2. A single IdP-only aggregate or a pipeline triplet (preview, main, fallback)?
      1. It is not clear whether there is an Ops recommendation.
      2. The triplet is nice to have, but each aggregate carries a cost, and a large number of aggregates risks confusing deployers.
      3. Consensus from last week was that we don't need the triplet.  It's still our consensus.
    3. Ops claim: "we only get one chance to migrate deployers to a new metadata configuration." Thoughts?
      1. Scott K disagrees
    4. Getting started on this before final report from working group?
      1. Scott and SteveC will get the issue onto tomorrow's TAC agenda so Ops can start quickly.
  6. Commercial CDN latencies
    1. Amazon CloudFront last mile testing: https://media.amazonwebservices.com/FS_WP_AWS_CDN_CloudFront.pdf
    2. Interesting benchmarking exercise: http://goldfirestudios.com/blog/142/Benchmarking-Top-CDN-Providers
      1. It seems we're looking at ~0.25-second response times.
      2. CDNs still seem like the right approach, but we need to have our eyes open.
      3. A CDN connected to the Internet2 backbone is a good idea, although it's not clear how many InCommon participants are connected to Internet2.
    3. Per-entity metadata file size for InCommon (great data!)
      1. Largest (without signature) is 148K (due to embedded logo)
      2. Smallest (without signature) is 3K
      3. Median is 5.3K
      4. Average is 6.3K
      5. Std deviation is 4.7K
      6. Current overhead of signature is roughly 2.8K
      7. So most per-entity payloads will be roughly 8.1K
    4. What contribution to the actual user experience does the CDN latency make in a MDQ scenario?
      1. How does it compare to the rest of the SAML flow?
      2. How does it compare to the rest of the work the IdP or SP must do?
    5. What benchmarking should be part of the roadmap?
    6. What is the requirement for ongoing monitoring?
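The payload figures and MDQ questions above can be made concrete with a small sketch. The base URL below is hypothetical; the MDQ protocol specifies that the entityID is percent-encoded into the request path, and the size estimate simply combines the median descriptor size with the signature overhead quoted above.

```python
from urllib.parse import quote

MDQ_BASE = "https://mdq.example.org"  # hypothetical MDQ responder

def mdq_url(entity_id):
    # Per the MDQ protocol, reserved characters in the entityID
    # (including ":" and "/") are percent-encoded into the path.
    return MDQ_BASE + "/entities/" + quote(entity_id, safe="")

# Typical payload: median entity descriptor plus signature overhead.
median_kb, signature_kb = 5.3, 2.8
typical_payload_kb = median_kb + signature_kb  # roughly 8.1K, as noted above

print(mdq_url("https://idp.example.edu/idp/shibboleth"))
```

At ~8K per response, the CDN fetch is small compared to the rest of the SAML flow, which is why the latency questions above focus on round-trip time rather than transfer size.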
  7. CDN features and MDQ
    1. Push mechanism (scp, sftp, rsync, ...)
    2. Origin pull (instead of push)
    3. Purge (invalidation)
    4. Purge All
    5. HTTPS (custom SSL capability, i.e., InCommon can provide an X.509 cert)
    6. Access logs
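As a sketch of the purge (invalidation) feature above, the following builds the Varnish-style PURGE request a client might send to invalidate one per-entity metadata resource. The host is hypothetical, and commercial CDNs generally expose purging through their own REST APIs rather than a raw PURGE method.

```python
from urllib.parse import quote

def purge_request_line(host, entity_id):
    # Invalidate the cached per-entity metadata for one entityID.
    path = "/entities/" + quote(entity_id, safe="")
    # Raw HTTP request a purge client would send to a Varnish-style cache node:
    return f"PURGE {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

print(purge_request_line("mdq.example.org", "https://idp.example.edu/idp/shibboleth"))
```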
  8. SAMLbits CDN (Leif)
    1. Community-driven CDN specifically built for high-trust applications
    2. Can be customized for caching in the CDN flow
    3. Can translate headers, for example, SAML-HTTP
    4. Essentially a Varnish cache; the goal was to prove that a community-driven CDN could be built without consuming a lot of resources at each site.
    5. A couple of boxes are online with Internet2 operations (TSG).
    6. Doesn't require the network-aware interconnect that a lot of the commercial CDN appliances require
    7. It has been running for a couple of years and seems to work well.
    8. How do we address governance issues as SAMLbits becomes part of our solution?
      1. This is a good opportunity to start a discussion.  We could think of this as governed by the community of InCommon participants or by international federation operators.
      2. If it is to become part of the solution, it probably needs to go to Steering with a request of some sort. (+1)
    9. It's not overly difficult to deploy a local SAMLbits node, for example at a campus.
  9. Solution architecture description for the final report
    1. https://spaces.at.internet2.edu/x/u4EQBg
  10. Next call on September 7 at the usual time and place.