
Fall 2014 Netgurus Meeting

Internet2 will provide NetGurus a room set up in closed board-style conference seating to support 20-30 participants. The meeting will take place the afternoon after the Technology Exchange (October 26-30, 2014) conference concludes. We will have a projector and screen available if needed by participants. Please fill out this survey to gauge interest and suggest topics.



Indianapolis, IN




October 30, 2014


1:15pm - 6:00pm

NOTE: Lunch is on your own. An afternoon break with snacks and beverages will be provided. We will go to dinner as a group.


The Internet2 room block is at the JW Marriott hotel.

Tentative Agenda

  • Gurus start
  • Break and networking (203/Foyer space)
  • Guruing continues
  • Gurus and Guests Dinner


Contact Jeffry Handal ( to RSVP and to suggest topics you wish to discuss during the meeting. Attendance is limited to 25.



Attendees

  • Alan Whinery
  • Jeffry Handal
  • Michael Van Norman
  • Anthony Brock
  • Rich Cropp
  • Jason Mueller
  • Peter Gutierrez
  • John O'Brien
  • Mathew Almand
  • Scott Friedrich
  • Dan Schmiedt
  • Chris Konger
  • Charles Rumford
  • Adair Thaxton
  • Dave Farmer
  • Richard Machida
  • Dan Brisson
  • Clark Gaylord
  • Jason Wang
  • Kade Cole
  • Joe Breen
  • Ted Netterfield
  • Ethan Bateman
  • David Booner
  • Karl Newell
  • Jeff Ambern
  • David Hunter

Discussion Topics and Notes

Topics are submitted by participants. Please contact Jeffry Handal ( to add an item to the agenda.

  • Network Automation:
    • Ansible: network switch configuration automation; similar to Puppet but aimed at network gear.
    • Puppet: many vendors want to support Puppet; for example, Juniper and Cisco want to do official support. Puppet is painful to learn. The project gives Puppet a nice GUI front end.
    • Python for network engineers: this is an open-source project; please support it.
    • Farmer has a home-grown tool to do some automation. Contact him if interested.
    • Stay away from:
    1. Spectrum
    2. HP's IMC
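As a sketch of the template-driven idea these tools share (generating device config from data), here is a minimal stand-alone example; the template contents, interface names, and VLANs are illustrative, not from any deployment discussed above:

```python
from string import Template

# Illustrative switchport stanza template (Cisco-style syntax shown).
INTERFACE_TEMPLATE = Template("""\
interface $name
 description $description
 switchport access vlan $vlan
 no shutdown
""")

def render_interfaces(interfaces):
    """Render one config stanza per interface dict."""
    return "".join(INTERFACE_TEMPLATE.substitute(i) for i in interfaces)

# Example inventory data; real tools pull this from a database or YAML.
ports = [
    {"name": "Gi1/0/1", "description": "printer-lab", "vlan": 20},
    {"name": "Gi1/0/2", "description": "ap-uplink", "vlan": 30},
]
print(render_interfaces(ports))
```

The same pattern, with a real inventory source and a push mechanism, is essentially what Ansible templates and home-grown generators do.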
  • Best practices: saving battery life on mobile devices on Wi-Fi networks.
    • Tweak neighbor discovery (ND) parameters, especially on wireless.
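One vendor's knobs for the ND tuning above (Cisco IOS syntax shown; the values are illustrative starting points, not recommendations from the meeting):

```
interface Vlan100
 ! Slow down router advertisements so sleeping clients wake less often
 ipv6 nd ra interval 600
 ipv6 nd ra lifetime 1800
```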
  • SDN and CCIE Grants:
    • Users do not care how their packets get transported. They just want things to work.
    • It has become a management nightmare to keep and operate. People are becoming more expensive than equipment.
    • Time is being lost in the lead time needed to engineer fiber paths to labs.
    • Not related to the CCIE grant: in general, institutions are not looking at putting SDN (i.e., OpenFlow) on the campus network, but are looking for equipment to have hooks for possible future use.
    • Many L2 forwarding issues.
    • Infrastructure is great until it breaks. Researchers do not like to fix it.
    • Message: performance matters, and that is it. In real terms, "10 gig" means about 8 Gbps.
    • Network engineers do not like OpenFlow configuration.
    • Network speed swamps file system speed. The industry needs parallel and cluster file systems for beyond-10-gig speeds.
  • IPv6 penetration (into the Imperial Death Star's main reactor through an unprotected exhaust port).
    1. To avoid relying on server people, do it on the load balancers.
    2. Do it for outward-facing services only.
    • RFC 6939: ask vendors for it to help with user tracking and DUIDs.
    • Survey question: Is anyone considering IPv6 allocations of /64s to clients? Answer: no, because it makes scanning predictable.
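For scale, a back-of-envelope calculation of why an unpredictable /64 resists brute-force scanning, which is the protection that predictable per-client allocations would give away (the probe rate is an assumed figure):

```python
# A /64 leaves 64 bits of host address space.
hosts_in_64 = 2 ** 64

# Assume an aggressive scanner sending one million probes per second.
probes_per_second = 1_000_000
seconds = hosts_in_64 / probes_per_second
years = seconds / (365 * 24 * 3600)

print(f"{hosts_in_64} addresses; ~{years:,.0f} years to sweep at 1M probes/s")
```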
  • Anycasting DNS, NTP, etc. services across a system intranet.
    • Consider anycast for any service.
    1. Most entities do it using BGP in their MPLS core.
    2. Quagga is mostly used for DNS injection of routes.
    • The approach has been great for not interrupting services.
    • Consider NTP as a service to try it on.
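A minimal sketch of the Quagga/BGP route-injection pattern mentioned in item 2 above (addresses and AS numbers are illustrative documentation/private values, not from any site discussed):

```
! bgpd.conf on an anycast DNS node
router bgp 64512
 bgp router-id 192.0.2.10
 ! Announce the anycast service address toward the MPLS core
 network 198.51.100.53/32
 neighbor 192.0.2.1 remote-as 64512
 neighbor 192.0.2.1 description core-router
! A separate health check should stop bgpd or remove the route when the
! DNS daemon fails, so traffic shifts to another anycast node.
```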
  • DNSSEC challenges/successes (e.g., deployment of SSHFP, TLSA)
    • Delegate DNS only to people who know what they are doing.
    • Delegating with DNSSEC will be interesting.
    • Do not let others run DNS; IPAM helps you provide delegated control.
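As a sketch of the SSHFP deployment mentioned above, here is how an SSHFP record can be built from a host key blob (RFC 4255/6594 layout: SHA-256 fingerprint of the base64-decoded key; the key blob below is a dummy, not a real host key):

```python
import base64
import hashlib

def sshfp_record(hostname, algorithm, key_b64):
    """Build an SSHFP RR: <host> IN SSHFP <alg> 2 <sha256-of-key-blob>.
    Fingerprint type 2 = SHA-256 over the raw (base64-decoded) key blob."""
    digest = hashlib.sha256(base64.b64decode(key_b64)).hexdigest()
    return f"{hostname} IN SSHFP {algorithm} 2 {digest}"

# Dummy Ed25519-shaped key blob for illustration only.
key = base64.b64encode(b"\x00\x00\x00\x0bssh-ed25519" + b"\x00" * 36).decode()
print(sshfp_record("ns1.example.edu.", 4, key))
```

In practice `ssh-keygen -r <hostname>` emits the same records directly; the point here is just that the fingerprint is a plain hash of the published key.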
  • ARP/ND cache data collection approaches
    • Most track with home-grown applications.
      • Hawaii and UNC showed off tools.
    • A DSCP value can be used to mark traffic for quarantine and drop.
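A minimal sketch of the home-grown collection idea: poll the neighbor table and record IPv6-to-MAC bindings. The sample text mimics Linux `ip -6 neigh show` output (the lines are fabricated); a real collector would run the command on a schedule and store results with timestamps:

```python
# Fabricated sample of `ip -6 neigh show` output.
SAMPLE = """\
2001:db8::1 dev eth0 lladdr 00:11:22:33:44:55 REACHABLE
2001:db8::2 dev eth0  FAILED
fe80::1 dev eth0 lladdr 00:11:22:33:44:66 router STALE
"""

def parse_neigh(text):
    """Return {ipv6_address: mac} for entries that carry a lladdr."""
    table = {}
    for line in text.splitlines():
        fields = line.split()
        if "lladdr" in fields:
            table[fields[0]] = fields[fields.index("lladdr") + 1]
    return table

print(parse_neigh(SAMPLE))
```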
  • Data center networking; private/hybrid cloud; HPC and data-intensive computing; campus networking.
    • SDN and the data center:
    1. Dynamic provisioning of everything.
    2. Work together with the server folks.
    3. Create a demarc point to facilitate troubleshooting. The question remains who is responsible for the network, especially when it becomes virtualized in a vswitch.
    • Applications need to be resilient. Do not rely on L2.
    • Many applications do not need to be on campus; perhaps do not worry about those.
    • Some think it is strategic to have your own cloud.
    • HPC will still need to reside in the data center.
    • Compare the cost of doing it in Amazon versus in a local data center; you must find hidden efficiencies and inefficiencies.
    • Track what is in the cloud. This will help in prioritizing what to fix first.
    • Educate people on expectations when going to the cloud.
  • Research computing: supporting researchers is challenging.
    • Suggestions from the group on dealing with researchers:
    1. Outreach efforts: reach out to the deans of the technical colleges.
    2. Some coaching may be required of users/researchers.
    3. Good model to follow: partner with the cyberinfrastructure group, whatever that looks like at your institution.
    4. Survey faculty and staff. You will find most issues they face are on the application side.
    • Science DMZ: little interest; most users do not know where in the network they are. They just want it to work really, really fast.
    1. Most people are happy with 10 gig. That is good, because 100 gig will drop in price as we wait.
    2. Researchers do not like to talk to each other; they still ship drives. Everything is ad hoc, word of mouth. No VPs incorporated.
    3. Many people still use scp. Recommend GridFTP (designed to move data) or HPN-SSH. SCP is limited to around 300 Mbps; try HPN-SSH to improve SSH performance.
    4. Rsync: IPv6 does work; local tunnel issues; a TCP window issue is possible. rsync has many security knobs. Try no authentication and use the knobs.
    5. Recommendation: encrypt data when it is acquired. The risk is higher when data sits on disk than in flight. SSH only protects authentication and not the data stream.
    • Research network segmentation on the main campus:
    1. Many instruments around campus have bad applications on them. They are not safe. The critical asset is the data. How do you get data to a safe location?
    2. Separate traffic with VLANs, VRFs, and so on.
    3. Push firewalls to the hosts.
    4. High-dollar assets need to be protected!
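The ~300 Mbps SCP ceiling mentioned under the Science DMZ notes is consistent with window-limited TCP throughput, roughly window size divided by round-trip time; enlarging the window is what HPN-SSH does. A rough calculation (window size and RTT are assumed values for illustration):

```python
def max_throughput_mbps(window_bytes, rtt_seconds):
    """Window-limited transfer rate: one window per round trip."""
    return window_bytes * 8 / rtt_seconds / 1e6

# A ~2 MB flow-control window over a 50 ms path caps out near the
# 300 Mbps figure from the notes, no matter how fast the link is.
print(max_throughput_mbps(2 * 1024 * 1024, 0.050))

# A much larger window (as HPN-SSH allows) raises the ceiling.
print(max_throughput_mbps(64 * 1024 * 1024, 0.050))
```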
  • Vendor feedback section
    • HP: Comware experiences
    1. Mostly used at L2.
    2. The division was just sold; HP is going through changes. HP personnel do not know what that means for this line of products.
    • Arista is good in HPC applications.
    • Huawei
    1. They collect tons of debug information; logging is bad; testing has been bad for a university evaluating them.
    2. Doing backups is awful.
    • Meru wireless: run away.
  • Topics not covered:
    • VM environment connections to physical networks.
    • MAC address privacy: implications to networks.
    • DNS firewalls
    • SPB
  • Next Meeting
    • The open invite from NANOG is still standing.
    • Will inquire with the group in January 2015.

Highly motivated group of individuals:

Dinner Options

Barcelona Tapas

Group pictures during dinner:

Thanks for the Support

Many thanks to our sponsors who have made this meeting possible:

Marie Modrell

Cecelia Dove

Kelly Faro