
Thursday, September 26, 2013
Attendees:
Dan Schmiedt
Chris Small
Kevin Mayeshiro
Michael Lambert
Michael Van Norman
Kathy Benninger
Dale W. Carder
Deniz Gurkan
1) Agenda Bash
2) OpenDaylight update and discussion: future TSC participation?
3) NTAC face-to-face, Dallas meeting updates
4) CC-NIE awardees? Deniz will give an update for UH/Rice
5) Application-oriented networking: ONUG, ONF+USIgnite+GENI:
6) Condo of condos
7) Any suggestions?
OpenDaylight - Brent may give an update. The TSC will be engaging the R&D community soon. Accommodate ODL on one of the wg-sdn calls: start the call 30 min early, or run 30 min late?
NTAC @ Dallas: policies on operating I2 networks, metrics to consider, routing policies. A BGP replay instance to visually replay what happens with routes: Chris set up an I2 BGPlay instance, useful for visualizing large events. Expand the capability so monitoring can be easier. Where does the BGP info come from - existing feeds, Chris?
I2 BGP RIB dumps are available; archives on tape go back years if anyone wants them.
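Public BGP RIB dumps are conventionally distributed as MRT files (RFC 6396). As a stdlib-only sketch (assuming the I2 dumps follow that standard format; per-record parsing beyond the common header is out of scope), walking the records looks like:

```python
import struct

# RFC 6396 common MRT header: timestamp, type, subtype, length (12 bytes)
MRT_HEADER = struct.Struct("!IHHI")

def iter_mrt_records(blob: bytes):
    """Yield (timestamp, type, subtype, body) tuples from raw MRT data."""
    offset = 0
    while offset + MRT_HEADER.size <= len(blob):
        ts, rtype, subtype, length = MRT_HEADER.unpack_from(blob, offset)
        body = blob[offset + MRT_HEADER.size : offset + MRT_HEADER.size + length]
        yield ts, rtype, subtype, body
        offset += MRT_HEADER.size + length
```

Type 13 (TABLE_DUMP_V2) with subtype 2 (RIB_IPV4_UNICAST) is what a typical IPv4 RIB dump contains; tools like BGPlay consume the same data.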
Mesh topology at L3, given L2: should we have a full mesh? Establish mesh peerings as necessary when traffic exceeds a threshold. Feature requests: member control and delegation of OESS with golden circuits and VIP circuits; SDN education and training efforts; tech field days; vendor engagements; routing policies for TR-CPS. Tools: an L2 traceroute for diagnosis/visibility. Support for a local function in OpenFlow: on a long IP circuit, a local function could insert a flow definition into the local switch. Using the LOCAL port in OpenFlow, instead of an output port back into the switch, enables hybrid operation, going back and forth between OpenFlow and the IP network; a traceroute function could then be implemented, since the specialized flow would have an IP hook.
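A minimal sketch of the LOCAL-port hybrid idea using Open vSwitch's ovs-ofctl (the bridge name br0 and the match prefix are hypothetical, not from the meeting):

```shell
# Steer a specific flow out an OpenFlow-controlled port (port 2 here)...
ovs-ofctl add-flow br0 "priority=200,ip,nw_dst=192.0.2.0/24,actions=output:2"
# ...and send everything else to the switch's LOCAL port, i.e. back to the
# host's normal IP stack - hybrid OpenFlow/IP operation. (actions=NORMAL is
# the other common hybrid mechanism: ordinary L2/L3 switch processing.)
ovs-ofctl add-flow br0 "priority=0,actions=LOCAL"
```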
Stats collection: Penn State (InMon - is it OpenFlow capable?), Grover on I2 (DeepField? cloud NetFlow services, used by members through federated identification: members' traffic flow through I2, destinations on I2, etc.).
DeepField - can Chris provide background?
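InMon's monitoring is sFlow-based. As a stdlib-only collector sketch (the fixed sFlow v5 datagram header, assuming an IPv4 agent address; UDP 6343 is the conventional sFlow port):

```python
import socket
import struct

def parse_sflow_header(datagram: bytes) -> dict:
    """Parse the fixed 28-byte sFlow v5 datagram header (IPv4 agent)."""
    (version, addr_type, agent_ip, sub_agent,
     seq, uptime_ms, n_samples) = struct.unpack("!II4sIIII", datagram[:28])
    return {
        "version": version,
        # address type 1 = IPv4 in sFlow v5
        "agent": socket.inet_ntoa(agent_ip) if addr_type == 1 else agent_ip.hex(),
        "sub_agent_id": sub_agent,
        "sequence": seq,
        "uptime_ms": uptime_ms,
        "sample_count": n_samples,
    }

def listen(port: int = 6343):
    """Sketch of a collector loop: print the header of each datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, addr = sock.recvfrom(65535)
        print(addr, parse_sflow_header(data))
```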
Cost associated with usage of links: non-standard metrics for routing policies? The NTAC meeting stayed at a higher level. perfSONAR boxes and dashboard viewing (ESnet, etc.): display of service availability; how to help differentiate campus vs. external problems; funding of usage by researchers.
I2 NOC notifications: e-mail only? An archive is desirable, and e-mail may be down together with the network. Quicker notifications wanted. Is the information public? The GNOC was represented.
Formal report to follow soon.
SDN event, Oct 4-5: CIC, Juniper, GRNOC, I2. A 2-day technical workshop: OpenFlow implementations with Juniper, Science DMZ, perfSONAR, SDN (GRNOC).
CC-NIE: U. Houston and Rice got an award that includes a 100G link to the Innovation Platform. A small citywide regional network has existing fiber connecting UH, Rice, and others, reaching I2 via LEARN; grant money will fund the upgrade to 100G. Texas A&M is also included. Rice also got a grant for an optical SDN box.
Dale C./Wisconsin: got an award with UCAR/UVA for a new protocol to handle the transmission of weather data. Dale will be participating. May use DYNES or perhaps GENI infrastructure going forward; they have had issues keeping DYNES working.
Heidi Pitcher Dempsey - rumors: Univ. of Utah is putting a very large InstaGENI rack in a new city data center; Northwestern: international interdomain connections, Science DMZ, L2 SDN.
Brent: first ODL mini-summit. Network virtualization is the killer app. Hyperscale OpenStack deployments need network control tied together with controllers: OVSDB, vSwitch data-plane programming, Quantum -> Neutron integration. OpenFlow 1.3 code, with northbound app integration; Dec. 9th simultaneous release. The data-path implementation is OVS; the controller side is ODL. Campus applicability is not clear yet: OVSDB integration (VMware) - islands tied together using a VXLAN overlay, with OVSDB-provisioned gateways and config management. WAN use cases: BGP, PCE, overlays/abstractions. ODL uses abstractions for north and south instead of drivers. The TSC would be happy to talk to wg-sdn: Dave Meyer has been invited to a wg-sdn call.
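A minimal sketch of the "islands tied using VXLAN" overlay using Open vSwitch's ovs-vsctl (the bridge name br-int, port name vx0, remote address, and VNI are hypothetical; in the ODL/OVSDB model the controller would push this configuration rather than a human):

```shell
# Create an integration bridge and a VXLAN tunnel port to the remote island.
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int vx0 -- set interface vx0 type=vxlan \
    options:remote_ip=192.0.2.10 options:key=5001
```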
Condo-of-condos: instead of a pile of Linux boxes in every closet, a central HPC resource. Compute $ comes from research and is managed centrally; Clemson matches compute $ dollar-for-dollar, plus all other resources on the clusters become available. The idea is to move this to a national scale; an unsolicited proposal went to NSF on this - not funded yet! PhDs would sit with researchers: HPC and networking facilitators funded across the country to realize the condo-of-condos, leveraging AL2S to connect the condos. Talk to J. Bottum if you want to join. The 1600-node condo at Clemson became 16000 nodes! No more random HPC clusters on the campus during the course of the last 3 years, and non-traditional departments have started using HPC, e.g., the humanities!
