Ever notice how the most interesting conversations happen when you get the cloud crowd in a room talking about the latest shiny object? That's exactly what went down during our recent NET+ GCP Tech Share, where we dove headfirst into Google's Agentspace platform and came up for air with some fascinating insights.
The New Kid on the Block
Google is positioning itself as the AI agent platform for higher education, and they're not just slapping an .edu label on existing tech and calling it a day. Google's Chris Daugherty walked us through what appears to be a genuine effort to build something that actually fits how we work in academia – complete with EDU-specific pricing (music to every CIO's ears) and connectors designed for our ecosystem.
The Agentspace platform promises curated connections to the systems we actually use – Canvas, SIS platforms, and the like. But here's where it gets interesting: these aren't just API calls masquerading as intelligence. These are agents that could handle tutoring, help desk support, and who knows what else the creative faculty and staff will dream up.
The Elephant in the Cloud
Of course, no higher ed tech discussion would be complete without someone bringing up the security elephant in the room. Jon shared a sobering tale about a Microsoft Copilot rollback due to data exposure issues – the kind of story that makes CISOs break out in cold sweats and question their life choices.
The good news? Google seems to be taking notes. Agentspace inherits Google Cloud's security protocols, and they're emphasizing granular controls that should let institutions maintain the death grip – er, careful oversight – they need over their data. Still, we all know the devil's in the details, and those connector permissions are going to need some serious vetting.
Speaking of Shiny Objects
While we were geeking out over AI agents, Chris dropped in a casual demo of Google Vids' new avatar feature. Picture this: 30-second scripted presentations delivered by one of 14 different avatars. It's currently in internal testing but should hit the streets in a month or two.
What's Next?
The Google team is working on EDU-specific demos with actual Canvas data integration, finalizing pricing that won't require selling a kidney, and building out those educational system connectors. Meanwhile, several institutions are already kicking the tires on AI agents for various use cases.
We are planning a deeper dive into Agentspace in an upcoming NET+ GCP quarterly call, and honestly, I'm curious to see where this goes. The platform could be genuinely useful, or it could join the growing pile of "seemed like a good idea at the time" projects that make for great lightning talks at Cloud Forum.
Either way, it's going to be an interesting ride. Anyone else ready to see what our community can build with AI agents?
Remember the old-fashioned barn raisings where neighbors would gather to help build something useful for the community? Well, we just had our first NET+ GCP "Barn Raising," and let me tell you, it was every bit as collaborative – just with more debugging and fewer calluses.
The Blueprint
Google's Rapid Innovation Team, together with the cloud team at Washington University in St. Louis, developed a slick campus engagement app that lets students discover events, set up profiles, and customize their experience based on interests. Think of it as the digital equivalent of that bulletin board everyone actually wants to read, complete with AI-powered chat functionality and cloud database backing.
The beauty of this barn-raising approach? We all started with the same code base, but each institution could customize it to fit their unique campus culture. As WashU’s John Bailey pointed out during our session, it's surprisingly straightforward to swap out source URLs and rebrand the whole thing to match your school's colors and structure.
Rolling Up Our Sleeves
Stone Jiang from Google walked us through the deployment process step by step, and true to form for any good barn raising, we hit our share of snags. Jonathan ran into authentication issues with his university email (because when has enterprise authentication ever been simple?), and we had the usual Firebase deployment hiccups that make you question your life choices.
But here's the thing about barn raisings – when someone hits a problem, the whole community jumps in to help. We troubleshot GitHub authentication, wrestled with index file overrides, and collectively figured out domain authorization errors. When Stone suggested replacing Google.com in the test code with university domains, Jeff Nessen weighed in on the organizational policy restriction changes necessary to make it work.
Beyond the Foundation
The real magic happened when we started talking about what comes next. How do you keep event data fresh without resorting to web scraping? (Stone's investigating that one.) Could Google BigQuery handle the data streaming? (Jonathan's exploring that angle.) What about email services and scheduling for production deployment?
Each institution walked away with their own customized version of the app, but more importantly, we all gained insight into how our colleagues are approaching similar challenges. John's already thinking about higher-level demos for WashU's new fiscal year, and Jonathan's diving into the backend possibilities.
The Real Build
Sure, we built a useful campus engagement app, but the real construction project was strengthening the connections between our institutions. When you get a bunch of higher ed technologists together with some good code and a shared problem to solve, interesting things happen.
Got ideas for our next collaborative build? Drop me a line – I'm always curious about what this community wants to tackle together.
Estimated reading time: 2 minutes
If you missed our second NET+ AWS barn raising event this May, you missed something special. I watched as participants from multiple institutions rolled up their sleeves and deployed Indiana University's Automated Transcription Service (ATS) in real time, troubleshooting challenges together and celebrating successes. Let's take a closer look at what went down.
The Challenge: Secure Research Transcription
Before we get into the barn raising event itself, let me give some context on why we chose this project in the first place. Transcribing interviews and focus groups typically costs researchers five hours of human time per audio hour—sometimes exceeding $10,000 per project. Commercial services like Otter.ai pose security risks, lacking the Business Associate Agreements required for sensitive research data.
Indiana University's ATS solves this by leveraging existing university AWS contracts with built-in security approvals. The result? Researchers get enterprise-grade security without complexity:
- Upload files, download transcripts—no technical skills needed
- Zero cost to researchers
- Word documents formatted for qualitative analysis software
Hands-On Success
I participated in this workshop and found it incredibly smooth. AWS provided sandbox environments where we could experiment freely. When we hit a hiccup during deployment—some versioning issues with the installation tools—I watched the community spring into action. With help from AWS experts and fellow participants, we resolved it swiftly.
The serverless architecture we deployed was elegantly simple:
- Audio uploads trigger Lambda functions
- Amazon Transcribe processes the files
- Automated conversion to researcher-friendly formats
- Built-in data retention for compliance
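To make the first step of that flow concrete, here is a minimal sketch of the upload trigger, assuming a Lambda function wired to S3 `ObjectCreated` events. The bucket name, output location, and job settings are illustrative, not IU's actual ATS configuration:

```python
def job_name_for(key):
    """Derive a Transcribe-safe job name from the uploaded object key.

    Transcribe job names only allow letters, digits, '.', '_', and '-'.
    """
    stem = key.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    return "".join(c if c.isalnum() or c in "._-" else "-" for c in stem)


def handler(event, context):
    """Lambda entry point for an S3 ObjectCreated notification."""
    import boto3  # available by default in the Lambda Python runtime
    transcribe = boto3.client("transcribe")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Kick off an asynchronous transcription job for the new audio file.
        transcribe.start_transcription_job(
            TranscriptionJobName=job_name_for(key),
            Media={"MediaFileUri": f"s3://{bucket}/{key}"},
            IdentifyLanguage=True,
            OutputBucketName="ats-transcripts-example",  # hypothetical bucket
        )
```

A separate function (not shown) would listen for job completion and convert the JSON transcript into the Word format researchers download.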
This system has already handled 677 files across 57 projects in 2024, all managed by a single coordinator.
Community Building in Action
What struck me most was the collaborative energy. When deployment challenges arose, participants immediately jumped in to help each other find solutions. Several schools successfully deployed ATS during the session, and I'm thrilled to report that institutions like UMBC continue using it today for their research transcription needs.
The beauty of ATS being open source sparked rich discussions about future possibilities. Participants brainstormed enhancements like building web interfaces for easier researcher access and expanding language support beyond English. This truly embodied the barn raising spirit—a community coming together to build infrastructure that benefits everyone.
Resources and Next Steps
Indiana University has made their ATS service information publicly available, providing a model for other institutions to follow.
Deploy Your Own Instance: The GitHub repository contains everything needed to bring secure transcription to your institution. The NET+ community stands ready to help with implementation questions.
Join the Movement: These barn raising events showcase the power of collaborative infrastructure development. Watch for announcements about the next workshop where you can contribute to building practical solutions for research challenges.
Estimated reading time: 2 minutes
The past six months of AWS Landing Zone Accelerator Community of Practice meetings have shown me something documentation can't provide: real stories from institutions dealing with the same LZA challenges, and at times, each solving them differently. The most valuable parts weren't the success stories, but the honest discussions about what actually works during implementation.
When Theory Meets Reality
I heard about the University of Denver tackling something that makes most people nervous: a major version upgrade from LZA 1.6 to 1.11. Their success provided the rest of us with a practical reference for planning similar moves. Sometimes the best learning happens when someone else goes first and lives to tell about it.
Tufts University continued to serve as our knowledge anchor, sharing insights from their greenfield deployment. When newcomers like University of Montana joined with basic network configurations, I could see how Tufts' shared experience and advice provided a roadmap rather than forcing them to start from scratch.
What fascinated me was hearing about University of Montana's transformation throughout these months. They began as interested observers and gradually progressed to confident implementers with guidance from Tufts and the broader group. Their journey from "curious newcomers" to "active deployers" perfectly captures why this community exists.
The Problems That Keep Coming Up
Several pain points emerged repeatedly, creating those collaborative problem-solving moments I love hearing in these sessions. MIT discovered broken documentation links, specifically 404 errors on LZA TypeDocs. While AWS took this back to the service team, it highlighted something we all know: documentation challenges are ongoing.
A significant bright spot came with the December 2024 release of LZA v1.11, which directly addressed our collective complaints about long pipeline execution times. Hearing about AWS delivering increased performance through parallelization and new StackSets operational preferences felt like responsive product development actually working.
At the time of writing, LZA version 1.12.3 has been released.
University of Colorado Boulder's exploration of Gateway Load Balancer with Palo Alto Marketplace NGFW caught my attention because it represented the kind of cost-conscious innovation I see across higher ed. When AWS connected them directly with colleagues who had implemented similar solutions, it demonstrated the direct expertise sharing that makes these sessions valuable.
Pipeline management conversations became our recurring theme. Colorado Boulder's practical approach using grep to parse changes resonated with institutions still figuring out LZA's sometimes opaque change process. The community's unofficial best practice emerged organically: "fail forward" rather than attempting to delete and recreate resources.
Compliance Reality Check
The compliance discussions revealed how diverse institutional requirements really are. Tufts targets CIS/AWS best practices with HIPAA aspirations, University of Denver focuses on AWS best practices, while MIT works toward NIST SP 800-53 compliance with CMMC additions. Rather than pretending there's one right answer, participants shared interpretations and planned approaches.
NSPM-33 requirements added complexity for institutions handling research data, requiring durable identity and comprehensive access tracking. What I appreciated was how participants shared interpretations rather than claiming definitive answers because honestly, we're all figuring this out together.
The feature request process matured from ad-hoc suggestions to a structured Google Form. By February, we had submitted ten formal feature requests! Hearing about this evolution from scattered ideas to organized advocacy showed the community finding its voice.
What's Coming Next
The momentum continues into 2025. AWS has committed to another roadmap session with the LZA development team and a planned TechEx presentation this December. But more importantly, we've established a reliable rhythm of practical knowledge sharing.
The AWS LZA Community of Practice has become exactly what it set out to be: a place where higher education practitioners can find colleagues who have faced similar challenges and lived to share the solutions.
Be sure to check out the other blog posts we've written. As always, feel free to send any feedback to tmanik[at]internet2[dot]edu.
You know that feeling when you get excited about a shiny new tool, only to discover it's like trying to fit a square peg into a round hole? That's exactly what came up during our latest NET+ GCP community call when we dove into the realities of Google Cloud’s GPU clusters in higher education.
The Promise vs. The Practice
Google's GPU and TPU adoption program sounds great on paper – who doesn't want powerful computing resources for their researchers? But as Ethan from Carnegie Mellon shared with refreshing honesty, the reality is a bit more complicated. His institution was the first to take the leap with Google's new GPU offering, and its cluster is running at a steady 60% utilization instead of the hoped-for 80-85%. It turns out that researchers have this pesky habit of bringing their own workflows and dependencies that don't always play nicely with toolkit solutions.
The one-size-fits-all approach? Yeah, that doesn't scale well when you're dealing with the beautifully chaotic world of academic research. File system quotas, permissions, storage issues not covered in the initial contract – it's like trying to organize a faculty meeting, but with more GPUs and somehow even more complexity.
The Art of Managing Expectations
Jeff Nessen, Google's lone industry architect for higher education (talk about wearing multiple hats), acknowledged what many of us have experienced: the current selection criteria for these programs might be a tad too simplistic. Instead of assuming every institution wants the same thing, maybe we should ask more questions upfront about what researchers actually need.
Ethan's suggestion of dedicated environments for specific time periods that can scale up or down makes a lot of sense. Think of it as giving each researcher their own sandbox instead of trying to manage a massive shared playground where everyone's fighting over the good toys.
There was also some discussion about Google sales representatives advising some customers to purchase GPU/TPU offerings separately from NET+ agreements. Nothing quite undermines community collaboration like mixed messages from the sales team, but Jeff committed to addressing that particular headache. Needless to say, if you are considering the Google GPU/TPU program, you can do it on your NET+ GCP agreement with no downside. Any discounts or enticements they offer should be able to run through your existing contract.
The Real Lesson
Here's what I appreciate about conversations like this: we're getting past the marketing materials and into the messy realities of implementation. Ethan's transparency about what's working (and what isn't) helps all of us make better decisions for our institutions.
The takeaway? These tools can be powerful, but they need to be tailored to how higher education actually works – not how we wish it worked.
Got your own GPU cluster war stories? I'd love to hear them – misery loves company, and shared experience makes us all smarter.
While our recent NET+ GCP community call had the usual updates about upcoming events (mark your calendars for the May 13 Barn-raising and the final 4 seats at Cloud Forum), the real meat of the discussion centered on a question that vexes many of us: how do you actually manage networking across multiple cloud environments?
The Washington University Approach
John Bailey from WashU shared their impressive – and slightly intimidating – approach to multi-cloud networking. Being geographically distant from major interconnects, they've installed edge networking devices in Ashburn and another location specifically for data security requirements. Their goal? Direct connections between their own routers and Microsoft routers, essentially treating cloud environments as extensions of their data center.
But here's what I appreciated about John's presentation: he acknowledged this isn't the "mere mortal" version of cloud networking. For the rest of us, he suggested more practical alternatives like redundant connections in one region or using partner interconnects instead of owning the network gear outright.
The Pragmatic Reality
When Bob asked how others are handling multi-cloud networking, the responses revealed the beautiful diversity of approaches in higher education:
Stratos from NYU keeps it simple with separate connections and strategies for each cloud provider – AWS, Azure, and GCP each get their own treatment.
Ethan from Carnegie Mellon mostly relies on VPNs and internet connections to AWS, and it's working well for them. His key insight? "The workload drives the design and architecture." Sometimes the simplest solution is the right solution.
Ezequiel from UCF highlighted a common split we see in higher ed: their enterprise side is Azure-heavy while research gravitates toward AWS. This creates interesting challenges when moving data between network segments, but it reflects the reality that different user communities have different needs.
The Tipping Point
Scott from Internet2 provided valuable context on when to consider dedicated interconnects versus VPNs. The drivers are typically high throughput requirements, predictability, and the need for rock-solid reliability. There's also a financial tipping point where data transfer costs make interconnects more economical than VPN solutions.
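That financial tipping point falls out of simple arithmetic. With hypothetical numbers (a flat monthly interconnect port fee, standard internet egress rates for VPN traffic, and discounted egress over the interconnect; none of these are actual cloud-provider quotes), the break-even volume is easy to estimate:

```python
def breakeven_gb(port_fee_usd, vpn_egress_per_gb, ic_egress_per_gb):
    """Monthly transfer volume at which a dedicated interconnect
    becomes cheaper than sending the same traffic over VPN."""
    return port_fee_usd / (vpn_egress_per_gb - ic_egress_per_gb)


# Hypothetical pricing: $1,700/month port fee, $0.08/GB internet
# egress via VPN, $0.02/GB discounted egress over the interconnect.
volume = breakeven_gb(1700, 0.08, 0.02)
print(f"Break-even at about {volume:,.0f} GB/month")  # roughly 28 TB/month
```

Below that volume, VPN wins on cost; above it, the flat port fee pays for itself, before even counting the throughput and reliability benefits Scott mentioned.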
The Bottom Line
What struck me most about this conversation was the reminder that there's no one-size-fits-all approach to multi-cloud networking in higher education. WashU's enterprise-grade solution works for their critical workloads and geographic constraints. CMU's pragmatic VPN approach serves their technology environment. UCF's split strategy reflects their organizational reality.
The key is matching your networking strategy to your actual requirements, not building the most impressive infrastructure on paper. Sometimes the "mere mortal" solution is exactly what you need.
How are you handling multi-cloud networking at your institution? Are you building direct interconnects or keeping it simple with VPNs?
Ever been to one of those conferences where you're simultaneously overwhelmed by the possibilities and skeptical about what actually applies to your day-to-day work? That's exactly how I felt watching our recent NET+ GCP community call, where Chris Daugherty walked us through his Google Next 25 experience.
The AI Everything Show
Let's be honest – Google Next was the "AI, AI, AI" show, but what tech conference of the past two and a half years hasn't been? Chris came armed with a 158-slide recap deck (because apparently that's what happens when you try to capture everything Google announced), but thankfully he spared us the full experience. Instead, he gave us the highlights: Gemini 2.5 Flash and Pro, Agentspace for building multi-agent systems, and Firebase Studio for rapid app development.
What struck me most was Chris's demo of NotebookLM. Here's a guy who just spent three weeks on vacation (including what sounds like an epic Japan cherry blossom trip), came back to a mountain of Google Next content, and used AI to get himself up to speed. He fed the tool blog posts, PowerPoint decks, YouTube videos – everything – and had it generate an 18-minute audio overview tailored specifically for education. That's not just clever; that's practical.
Tim Champ's feedback from Next was telling. As a first-timer, he was blown away by the sheer scale of expertise available – describing it as a "Costco-sized room of experts" where you could get world-class support just by waiting in line. That's the kind of access that makes conferences worthwhile.
The Reality Check
But let's talk about what higher ed really wants to hear about. Adam Deer joined to discuss Google's Cloud GPU TPU Acceleration Program, which promises H100 access essentially at parity with buying and hosting your own hardware over three years. The twist? It's designed for institutions ready to commit to 100+ GPUs, complete with flexible consumption models and professional services. It also facilitates quick and seamless upgrades as new processors are released.
The Bottom Line
While Google is positioning itself as the most enterprise-ready cloud for the AI era, what I appreciate about conversations like this is cutting through the marketing to understand what actually works in academic environments. Chris's transparency about using these tools in real-time, Adam's straightforward pricing discussions, and the community's honest questions about implementation challenges – that's where the real value lies.
The GenAI future marches on, but getting there requires cutting through the hype and focusing on what actually moves the needle for our institutions. Tune in next month, there’s sure to be more!
Ever set up what you thought was a bulletproof Google Cloud Organization, only to discover mysterious projects appearing like uninvited guests at a dinner party? You're not alone. Our recent NET+ GCP strategy call with Google's Jeff Nessen dove into the messy reality of how Google's various services can wreak havoc on your carefully managed cloud organization.
The Hierarchy That Rules Them All
Jeff started with a crucial reminder: Google Workspace sits at the top of the entire Google ecosystem hierarchy. Your Workspace super admin can essentially override anything happening in your GCP environment. While this gives them ultimate control, it also means that when that super admin retires (as Jeff has seen countless times), you're potentially locked out of critical billing and administrative functions. The solution involves opening support cases and getting letters from C-level executives – not exactly the streamlined process anyone wants.
The Usual Suspects
The conversation revealed several common sources of surprises for GCP administrators:
Apps Script turned out to be particularly sneaky. As one participant discovered, a computer science class assignment using Apps Script automatically created dozens of GCP projects, completely bypassing the project creation restrictions they thought they had in place.
Google Analytics and Google Ads can suddenly start appearing in your GCP billing when users enable BigQuery integration features. The challenge? Figuring out which department in marketing set this up and should be paying for it.
Terra.bio and NIH's All of Us create projects that bill back to your organization, often requiring detective work to trace costs back to the right researcher or grant.
The Billing Maze
One of the most practical insights was about billing account management. Jeff emphasized that being a GCP org admin doesn't automatically make you a billing administrator – these are separate permission sets. For NET+ subscribers using resellers like Burwood, this actually works in your favor, at least for the billing IDs on contract. Your reseller can help clean up orphaned billing accounts when people leave, since the distributor, Carahsoft, ultimately holds the billing super admin rights.
Real-World Solutions
Craig from Yale shared a practical approach: they work one-on-one with users to grant temporary access for linking billing accounts, then remove those permissions to prevent unauthorized project creation. Jon from University of Washington praised Burwood's help in tracking down "surprising" services that appear on bills.
The Organizational Reality
An interesting sidebar emerged: at most institutions, Google Workspace (collaboration/productivity) and GCP (cloud) are managed by completely separate teams. In Google's worldview, you would use only Google, so there is no conflict. In reality, the cloud team at most institutions is trying to support and develop strategies around multiple cloud platforms, while the collaboration team is doing the same in their space. We are organized by function, not by vendor. With good communication and collaboration this is a non-issue operationally, but it can create compliance challenges of its own, considering that the Workspace team has ultimate override capability over the Cloud team's carefully constructed security policies.
The bottom line? Managing a Google Cloud organization isn't just about GCP policies and permissions. It's about understanding the interconnected web of Google services and planning for the inevitable exceptions that will test your governance model.
What unexpected Google services have surprised you in your GCP environment? The community would love to hear your war stories.
In case you missed it, here are the latest updates from the NET+ Google Workspace for Education (GWE) program, along with the new Gemini features.
NET+ GWE Program Updates
- 2025 Renewals
If your institution is up for renewal this year, we've streamlined the process—universities only need to sign the Reseller Service Order (RSO) form with the 2025 pricing exhibit. Reach out to your Reseller to request the 2025 renewal paperwork. If you have any questions, check out the 2025 Frequently Asked Questions, or reach out to us at netplus@internet2.edu.
- How to report misuse of Google tools
As part of a joint effort between the NET+ GWE Service Advisory Board, Internet2, and REN-ISAC, a dedicated intake channel is now available to report compromised Google products, including Google Forms, Gmail, Google Drive, Docs, Sites, Drawings, Sheets, and Slides. As announced on March 4, REN-ISAC has enrolled in the Google Workspace Priority Flagger Program.
To report a compromised Google product, simply send the link to soc@ren-isac.net. This program is open to all universities, and your campus does not need to be a paid Google Workspace customer to participate. If your institution is a member of REN-ISAC, your point of contact for REN-ISAC should have received an informational message on March 4 titled “Report Google form phishing to REN-ISAC”.
Upcoming Events
- Strengthen Your Defenses: Essential Cybersecurity Tools within Google Workspace for Education (virtual webinar)
- Date: April 2 at 3pm ET
- Topic: This session will cover key security features, such as user access controls, data encryption, and real-time monitoring, to help educational institutions safeguard sensitive data. Learn practical strategies to improve cybersecurity and ensure a safe digital environment for students, faculty, and staff.
- Audience: CISOs, security professionals, directors, managers, and workspace administrators
- Registration URL: https://internet2.zoom.us/webinar/register/WN_z0HC1f6cQzapapU4Dq5bQg
- The Internet2 Community Exchange Conference brings together research and education leaders to explore cutting-edge technology, collaboration, and infrastructure. This year, the Google team will be in attendance, sharing insights on their latest innovations and partnerships. Sessions will include:
- AI on Campus: Balancing Innovation and Data Security
- Date: Tuesday, April 29 at 4:00pm
- AI-Powered Partnerships: How we are revolutionizing student success and campus operations
- Date: Thursday, May 5 at 8:40am
Google Updates
New Gemini app features - Available at no cost!
All Gemini app users (18+) will have access to the following features free of charge:
- Gems
Gems are customized versions of Gemini that you can personalize to be experts on any topic. You can get started with a Gem that is premade by Google, like Learning coach or Brainstormer, or create your own custom Gem with the option to ground it in your own sources to provide even more helpful responses — no coding required. You can learn more about how education institutions are using Gems here.
- Deep Research
Deep Research in the Gemini app can save you hours of time as your personal AI research assistant, searching and analyzing information from across the web and synthesizing it into comprehensive reports with citations in just minutes. Education institutions are using Deep Research to quickly get up to speed on various topics, get help with grant writing and lesson planning, and so much more. You can learn more about Deep Research in this video.
Gemini users 18+ can try Deep Research free of charge with five reports per 30-day period, and users with a Gemini Education license get full usage to save even more time on their most complex projects.
Gemini LTI™ is now live! Gemini LTI™ enhances the educational experience for both educators and students by providing AI-driven tools and features powered by Gemini, directly within their LMS environment. Gemini LTI™ integrates seamlessly with Canvas by Instructure and PowerSchool Schoology Learning, empowering users to access advanced AI tools in their everyday learning and teaching.
To stay up to date on the latest Gemini updates, visit: https://blog.google/products/gemini/
We appreciate your continued engagement with the NET+ Google Workspace for Education program. If you have any questions, feel free to reach out to netplus@internet2.edu. We look forward to seeing you at our upcoming events!
You know what I love about our NET+ GCP community calls? We dive straight into the weeds. This week's conversation was a perfect example – equal parts practical problem-solving and "wait, how does that actually work?"
The Skills Boost Reality Check
Google's Cloud Skills Boost program came up again, and it's clear some organizations are getting real value from it. Charles from NYGC shared how they're using it seamlessly through their reseller, while Ezequiel talked about incorporating it into their onboarding process for new projects. But here's the thing that caught my attention: the confusion over Skills Boost courses that also exist on Coursera. Because apparently having multiple learning platforms isn't complicated enough already.
The big news? Google announced at Next 2024 that each public sector institution gets free licenses. Chris hinted there's another announcement coming soon on this front, so stay tuned.
Marketplace Madness
Ethan from Carnegie Mellon dropped a question that made everyone lean in: "Has anybody successfully used a Carahsoft billing account with a marketplace product like Databricks?" Doug from Burwood's response was basically "it's complicated" – which is consultant-speak for "buckle up." The Google Cloud Marketplace doesn't play nicely with a reseller in the mix, which creates all sorts of fun billing gymnastics.
This is exactly the kind of real-world friction that doesn't show up in vendor presentations but absolutely matters when you're trying to implement these solutions.
The Teaching Credits Conundrum
Kelly from UW-Madison brought up something that's been bugging a lot of us: the current hold on Google Cloud Faculty teaching credits. Nobody seems clear on how long this pause will last, which makes planning for classes and workshops a bit like shooting in the dark. Meanwhile, she's planning to demo GCP Cloud Lab with automatic budget shutdowns – because nothing says "responsible cloud usage" like hard stops when you hit your spending limit.
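For anyone curious what those automatic shutdowns look like under the hood, the commonly documented pattern is a Cloud Billing budget that publishes alerts to Pub/Sub, with a Cloud Function that detaches the billing account once spend hits the cap. This is a hedged sketch, not Kelly's actual setup: the project name is hypothetical, and the function's service account would need Billing Admin rights:

```python
import base64
import json


def over_budget(notification):
    """Budget Pub/Sub messages carry costAmount and budgetAmount fields."""
    return notification["costAmount"] >= notification["budgetAmount"]


def handle_budget_alert(event, context):
    """Pub/Sub-triggered Cloud Function entry point."""
    notification = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    if not over_budget(notification):
        return
    # Detaching the billing account hard-stops every billable service in
    # the project until billing is re-enabled by an administrator.
    from googleapiclient import discovery  # google-api-python-client
    billing = discovery.build("cloudbilling", "v1")
    billing.projects().updateBillingInfo(
        name="projects/example-class-project",  # hypothetical project ID
        body={"billingAccountName": ""},        # empty string detaches billing
    ).execute()
```

It's deliberately blunt: running VMs simply stop, which is exactly the hard stop you want for a classroom sandbox, and exactly what you don't want in production.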
The Google Influence Elephant
We spent some time discussing Google's broader influence on GCP, touching on Firebase, Google Ads, the Maps API, and Apps Script. The "no users under 18" policy came up again, along with some cryptic reCAPTCHA changes that Kelly's still waiting to hear more about. It's a reminder that when you're in the Google ecosystem, you're not just dealing with cloud infrastructure – you're navigating an entire constellation of interconnected services.
The real question hanging over everything? How many institutions have NotebookLM turned on, and what's the feedback been? Sounds like the topic for our next deep dive.
Chris's Dream (And Ours)
Google’s Chris Daugherty shared his vision of leveraging CloudLab to create a seamless Colab Enterprise experience. Currently, the billing works fine, but connecting the free Colab interface with paid VMs is still clunky. It's the kind of integration challenge that sounds simple until you try to actually build it.
Got your own implementation war stories? I'm always curious about the gap between vendor promises and campus reality.
Estimated reading time: 4 minutes
If you missed our March NET+ AWS Tech Jam, you missed a thought-provoking conversation about how leading institutions are completely rethinking their approach to cloud provisioning. Penn State University's journey from manual processes to cloud automation sparked insights that could reshape how your institution empowers researchers and students while maintaining financial control.
Beyond the "Build It and They Will Come" Fallacy
The discussion quickly moved past outdated cloud provisioning philosophies to reveal a fundamental truth: successful cloud environments start with understanding what users actually need, not what IT thinks they might want.
Shane Heivly from Penn State University described their eye-opening shift from what he called "2018-style manual provisioning" to a more sophisticated user-centric approach. This isn't just about technical workflows—it's about transforming how institutions conceptualize their relationship with cloud resources.
"The backwards approach is critical," noted one participant. "When you understand what researchers and graduate students truly need to accomplish, you design systems that actually get used rather than bypassed."
Solving the Higher Ed "Snowflake" Challenge
What makes the academic environment so challenging for cloud administrators is the extraordinary diversity of use cases. From high-performance computing clusters processing climate models to AI workloads analyzing literary texts, every research group presents unique requirements.
Rather than attempting to build one-size-fits-none solutions, forward-thinking institutions are creating flexible provisioning frameworks that:
- Recognize different levels of cloud maturity among users
- Provide appropriate guardrails without stifling innovation
- Integrate with familiar campus systems like ServiceNow
- Scale to accommodate growing demands
The Financial Control Breakthrough
Perhaps the most compelling part of the discussion centered on how automated provisioning is revolutionizing financial control—without creating administrative bottlenecks.
Early adopters have implemented sophisticated tagging strategies that enable granular cost attribution while empowering users with real-time visibility into their spending. Rather than discovering runaway costs at month's end, institutions now deploy automated monitoring tools that can alert users or even shut down idle resources based on predefined policies.
One participant described how their institution reduced unexpected cloud expenses by 73% in just four months using this approach—while actually increasing cloud adoption rates.
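At its core, tag-based cost attribution is just rolling billing line items up by a chargeback tag and keeping untagged spend visible. A minimal sketch, assuming a simplified line-item shape — the `CostCenter` key and field names are illustrative, not any provider's actual export format:

```python
from collections import defaultdict

def attribute_costs(line_items, tag_key="CostCenter", fallback="untagged"):
    """Roll up billing line items by a chargeback tag.

    Each line item is a dict with a 'cost' and an optional 'tags' mapping.
    Anything missing the tag lands in a visible 'untagged' bucket, so gaps
    in the tagging policy surface in the report instead of disappearing.
    """
    totals = defaultdict(float)
    for item in line_items:
        owner = item.get("tags", {}).get(tag_key, fallback)
        totals[owner] += item["cost"]
    return dict(totals)
```

The `untagged` bucket is the part that matters operationally: a growing untagged total is the early warning that the tagging policy isn't being enforced.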
From Theoretical to Practical: Implementation Insights
What separated this Tech Jam from typical cloud discussions was the practical implementation roadmap that emerged. Participants shared specific tactics for overcoming common obstacles:
The "vending machine" concept emerged as a particularly compelling model, where users can self-service their cloud needs within appropriate boundaries. Rather than attempting to build comprehensive solutions immediately, participants advocated for starting with minimal viable products focused on common use cases, then expanding based on actual usage patterns.
Identity and access management strategies proved to be a critical foundation, balancing user autonomy with institutional security requirements through thoughtfully designed permission structures.
Building the Community Knowledge Base
The most valuable aspect of the Tech Jam was the rich exchange of real-world experiences that transcended vendor talking points. Participants shared struggles, successes, and everything in between—creating a knowledge base far more valuable than any white paper.
Multiple institutions shared how they've adapted their existing IT service management platforms to support cloud provisioning, allowing them to leverage familiar workflows rather than creating entirely new processes.
Making It Real on Your Campus
Ready to transform your cloud provisioning? The community highlighted several practical next steps:
- Arrange a consultation with your AWS Solutions Architect to evaluate your current provisioning approach
- Join the upcoming hands-on workshop series focused specifically on implementation strategies
- Connect with peer institutions through the Internet2 NET+ AWS community forums
- Access the shared resource repository containing sample workflows, policies, and lessons learned
The March Tech Jam reinforced that cloud provisioning isn't just a technical challenge—it's fundamentally about enabling research and education while maintaining appropriate controls. By focusing on user needs first and building iteratively, institutions are creating cloud environments that truly meet the unique demands of higher education. Here is the recording for you to view on-demand (unfortunately, due to user error, the recording started halfway through).
Don't miss next month's NET+ AWS event. Take a look at our calendar for upcoming events that you might be interested in. These monthly sessions continue to bring together innovative thinkers in higher education cloud computing to solve real-world challenges.
Be sure to check out the other blog posts we've written. As always, feel free to send any feedback to tmanik[at]internet2[dot]edu.
If you missed our February NET+ AWS Tech Share, you missed a fascinating look at how institutions are reimagining their cloud strategies in response to shifting research demands and budget realities. From Penn State's innovative platform approach to UMBC's compliance-focused landing zone implementation, the discussion revealed practical solutions that could transform how your institution delivers cloud services.
Beyond Account Provisioning: The Platform Evolution
The conversation quickly turned to a compelling vision shared by Penn State in response to their campus consolidation initiative. Rather than continuing with traditional account brokerage, they're developing a comprehensive platform-as-a-service (PaaS) approach specifically designed for research workloads.
"What does the cloud team become?" emerged as a central question as Penn State outlined their strategy to provide pre-configured environments with standardized guardrails that researchers can use immediately—without needing cloud expertise. Their approach includes:
- Developing Terraform scripts that create account foundations with pre-configured endpoints
- Focusing on specific high-value services like EC2, EMR for data processing, and AI services like Bedrock and SageMaker
- Moving from account administrators to platform architects and research enablers
This evolution represents a significant shift in how central IT delivers value to researchers. By handling infrastructure complexity behind the scenes, Penn State is creating an environment where researchers can focus on their work rather than cloud management.
Balancing Compliance and Innovation in Healthcare Research
UMBC shared their journey implementing the AWS Landing Zone Accelerator (LZA) specifically for HIPAA compliance, with HITRUST certification on the horizon. Their architecture offers valuable insights for institutions balancing strict compliance requirements with research agility:
- Using a separate Master Payer account dedicated to healthcare workloads
- Designing environments specifically for lift-and-shift migrations
- Exploring Kion integration for enhanced governance
The discussion highlighted how the monthly LZA Community of Practice calls have become an essential resource for institutions navigating similar compliance challenges. These sessions bring together practitioners solving real-world problems with AWS architects offering implementation guidance.
Student Empowerment: Cloud Access in the Classroom
Two contrasting approaches to student cloud access emerged during the discussion. UVA's data science school is pioneering a service catalog approach that provides students with controlled yet powerful AWS environments, including access to trn1.2xlarge instances for AI model training.
This contrasts with William & Mary's Kubernetes-based JupyterHub implementation, which offers simplified access for anyone with a W&M email address without requiring individual AWS accounts. Both examples demonstrate how institutions are creating purpose-built educational environments that balance security with accessibility.
What made these examples particularly valuable was hearing the practical implementation details directly from the teams involved—insights you can only get from peer institutions tackling similar challenges.
Practical Root Access Management Strategies
The session revealed diverse approaches to a critical operational challenge: managing root access to AWS accounts. From Penn State's targeted use cases to UVA's Control Tower implementation that eliminates password-based root access entirely, the community shared battle-tested strategies for balancing security with operational needs.
Several participants highlighted AWS's new capability to close accounts centrally without root credentials—a significant operational improvement that many weren't aware of before the discussion. These practical insights show how the community develops governance frameworks that balance security with operational efficiency.
What's Next: Learning Opportunities and Events
The AWS community calendar is packed with opportunities to continue these conversations:
- Internet2 Community Exchange (April 28-May 1, Anaheim)
- Higher Education Cloud Forum (May 20-22, New York)
- AWS Public Sector Summit (June 10-11, Washington DC)
- AWS IMAGINE (July 29-30, Chicago) – internal CFP deadline is April 22
For those looking to build cloud skills, the CICP CLASS Voucher Program offers specialized training including AWS Security in the Cloud, Solutions Architect Associate Certification, and Container Orchestration for Research Workflows.
March is Tech Jam month—a perfect opportunity to bring your specific cloud challenges and work through them with peers and AWS experts. These collaborative working sessions provide immediate, hands-on help with your most pressing implementation questions.
Join the Conversation
As higher education continues to face budget constraints while research demands grow more complex, these community conversations become increasingly valuable. The practical insights shared during this session—from platform architecture to compliance strategies—represent knowledge that would take months to develop independently.
NET+ AWS Tech Shares take place every other week. The next Tech Share promises to continue exploring these themes with practical demonstrations and real-world examples. Will your institution be represented in the discussion, or at least be there to listen in?
If you missed our February NET+ AWS Strategic Call, you missed a lively discussion on one of the most pressing challenges facing research institutions today: how to design networking infrastructure that can handle massive data transfers without breaking the bank. AWS Solutions Architects Kevin Murakoshi and Nick Kniveton shared strategies that could save institutions thousands in unnecessary costs.
The Hidden Costs of Moving Research Data
The conversation quickly revealed how easily networking costs can get out of hand when supporting data-intensive research workloads. While compute often gets the spotlight in discussions about cloud costs, it's the data movement that can unexpectedly dominate budgets in research environments.
Research computing presents unique challenges: massive datasets transferred between compute resources, sporadic high-intensity processing periods, and collaborations that span multiple accounts, projects, and regions. Each of these characteristics creates potential cost traps.
Kevin walked through a compelling real-world example showing how research projects can unknowingly spend thousands of dollars monthly just on cross-availability zone data transfers—a sobering reminder that even small per-gigabyte costs add up quickly at research scale if you do not architect your workloads thoughtfully.
Strategic Options for Multi-VPC Research Environments
The AWS team examined three strategies with remarkably different cost implications:
VPC peering works beautifully for straightforward connections between two research environments, remaining the most cost-effective option with free data transfer within the same availability zone.
Transit Gateway shines as networking needs grow more complex. This hub-and-spoke model simplifies management, though it introduces data processing fees of $0.02/GB.
VPC Sharing emerged as particularly well-suited to the ephemeral, high-burst nature of research computing. This approach allows multiple AWS accounts to share a single VPC infrastructure.
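To see how fast that $0.02/GB processing fee compounds at research scale, here's a back-of-envelope calculator for the three options. The rates are illustrative only — real bills also include attachment-hours, cross-AZ transfer, and NAT gateway charges:

```python
def monthly_transfer_cost(gb_per_month: float, strategy: str) -> float:
    """Back-of-envelope monthly data-transfer cost for the three strategies.

    Illustrative per-GB rates: intra-AZ VPC peering transfer is free,
    Transit Gateway adds the $0.02/GB processing fee mentioned on the call,
    and VPC sharing behaves like peering for same-AZ traffic.
    """
    rates = {
        "peering_same_az": 0.00,
        "transit_gateway": 0.02,
        "vpc_sharing_same_az": 0.00,
    }
    return gb_per_month * rates[strategy]
```

Run 50 TB a month through a Transit Gateway and the processing fee alone lands around a thousand dollars — exactly the kind of quiet line item Kevin's example warned about.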
VPC Sharing: A Game-Changer for Research Computing
Nick explained how VPC sharing aligns perfectly with the realities of research computing, generating significant interest during the session.
The separation of duties concept clearly resonated—network engineers maintain central control while researchers maintain autonomy over their workloads. This approach has the potential to transform current architectures at many institutions.
By sharing NAT gateways and other networking resources across multiple research projects, institutions can dramatically reduce duplicative costs. Early adopters have seen significant networking cost reductions while improving performance for their researchers.
Real-World Implementation Challenges
The discussion dug into practical implementation concerns including limitations (keep participant accounts under 100 per VPC), billing mechanics (VPC owners pay for infrastructure while participants pay for resource usage), and migration strategies.
The AWS team also addressed current limitations in tracking detailed data transfer costs. While AWS has received feature requests for improved cost attribution capabilities, they outlined practical workarounds for the present.
Community Knowledge Sharing
What made this call especially valuable was the rich exchange of real-world experiences from the community. The session highlighted examples of custom infrastructure-as-code tools that have streamlined VPC sharing implementation, and practical applications supporting multi-institution research collaborations.
Getting Support for Your Implementation
Need a strategic architecture review? Your AWS Solutions Architect can provide personalized guidance tailored to your specific research environment needs.
Ready for hands-on implementation help? The team offers "tech jams"—collaborative working sessions with AWS experts where you can tackle specific networking challenges together.
Looking for peer advice? The Internet2 NET+ AWS community provides ongoing forums where you can connect with colleagues who have already implemented these approaches.
Join Us Next Time
If this recap has you wishing you'd been part of the conversation, make sure you don't miss our next NET+ AWS Strategic Call in March. These monthly sessions bring together bright minds in higher education cloud computing to tackle common challenges and share innovative solutions.
The January AWS NET+ Tech Share meetings brought together members from across the research and education community to discuss cloud migration strategies, training initiatives, and innovative approaches to managing AWS resources. Here's what you need to know from our first meetings of 2025.
Community Updates and Events
Several significant events marked the beginning of the year:
- The NIH Genomic Data Sharing requirement webinar provided valuable insights into AWS compliance strategies, with recordings and slides now available for review.
- The R&E FinOps Virtual Conference on January 23rd brought together finance and technology professionals to discuss cloud cost optimization.
Institutional Highlights
Loyola Marymount University's Migration Journey
LMU shared their ambitious migration plans, including:
- A large-scale migration involving 100 servers, split between general infrastructure and Banner system
- Partnership with AWS, including direct support from the AWS team and the Migration Acceleration Program
- Strategies for managing the migration with a lean team
University of Virginia's Service Catalog Initiative
UVA is developing a Service Catalog for AWS resources, specifically designed to help students familiarize themselves with AWS services. While Research and Engineering Studio (RES) implementation is currently on hold due to cost considerations, the team continues to explore alternative approaches for providing AWS learning environments.
Northwestern University's Infrastructure Improvements
Northwestern implemented a slick solution using SNS and Lambda to clean up feeds from logs, along with deploying a new Terraform project. They've also enhanced their security monitoring by implementing CloudTrail log integration with Splunk.
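Northwestern's pattern — SNS fans log events out, a Lambda filters the noise before it reaches anyone — can be sketched in a few lines. The SNS event shape (`Records[].Sns.Message`) is the real one Lambda receives; the `logLines` payload and the noise prefixes are hypothetical stand-ins for whatever their feeds actually carry:

```python
import json

# Hypothetical noise patterns; Northwestern's actual filters weren't shared.
NOISE_PREFIXES = ("HealthCheck", "ScheduledScan")

def handler(event, context=None):
    """Lambda-style handler that drops noisy lines from an SNS log feed.

    Iterates the standard SNS event records, decodes each JSON message,
    and keeps only the log lines that don't match a known noise prefix.
    A real handler would forward the survivors downstream; returning
    them keeps this sketch self-contained.
    """
    kept = []
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        for line in message.get("logLines", []):
            if not line.startswith(NOISE_PREFIXES):
                kept.append(line)
    return {"kept": kept}
```

The appeal of the SNS + Lambda combination is that the filter lives in one small function instead of being re-implemented in every downstream consumer.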
Training and Development Initiatives
The community identified several key areas for future training:
- Cloud infrastructure for networking and security teams
- Data lake implementation and management
- FinOps training for both IT business office staff and developers
- Container migration strategies for VM teams
- SkyPilot framework implementation
Of particular interest is SkyPilot, an open-source framework from UC Berkeley's Sky Computing Lab that enables cost-effective multi-cloud management for machine learning and data science workloads. The Internet2 CLASS program is collecting these ideas and others. If you have more, or if you'd like to write and teach one of these topics, reach out to class[dot]internet2.edu.
Managing "Free Range" AWS Accounts
Several institutions shared successful strategies for bringing independently managed AWS accounts under central IT governance:
- Boston University implemented a service catalog approach with a 98% success rate in centralizing AWS accounts
- Texas A&M University developed an email filter system to monitor new AWS account creation
- Baylor College of Medicine established a procurement system flag to notify the cloud team of cloud-related purchases
Research Computing Solutions
The community discussed various approaches to supporting research computing needs:
- Three common researcher profiles were identified:
  - Basic compute and storage needs (suitable for Lightsail)
  - HPC requirements
  - Advanced data processing needs requiring native AWS services
- Several institutions are exploring sandbox environments and credit systems to support rapid prototyping while maintaining oversight
- The community showed interest in exploring Vocareum's AWS account deployment feature as a potential solution for sandbox environments
- The community showed interest in exploring Vocareum's AWS account deployment feature as a potential solution for sandbox environments
Looking Ahead
The community continues to evolve its approach to cloud computing, with a focus on:
- Developing more comprehensive training programs
- Improving account management strategies
- Enhancing support for research computing
- Implementing cost-effective solutions for educational environments
Sometimes our best community calls happen when we skip the formal presentations and dive straight into the real challenges everyone's facing. Since that's how pretty much all Tech Shares work, this week's NET+ GCP Tech Share was no surprise – a practitioner-to-practitioner session that covered everything from unused project cleanup to the eternal debate over chargebacks versus showbacks.
The Great Project Cleanup Saga
The conversation kicked off with updates on tools for finding unused projects. Cynthia from UChicago shared her ongoing struggles with the Ramora tool (designed to identify folder admins) failing in their environment, requiring a Google support ticket to resolve. Meanwhile, Jon from the University of Washington had a more dramatic success story: working with Burwood, they rewrote Ramora into a function that identified 16,000 potentially unused projects out of 70,000 total. That's an impressive discovery, but now comes the hard part – verifying with absolute certainty which ones are actually safe to decommission.
Jon's suggestion that Burwood offer this as a managed service makes perfect sense. As Ethan from Carnegie Mellon noted, Google is much easier for "chicken herding" than AWS – his team finds GCP more efficient for bringing unmanaged workloads under their organizational purview.
The Chargeback Dilemma
Gabe from Penn State dropped an interesting bombshell: they're considering abandoning chargebacks entirely. With $12 million allocated for cloud projects, their finance head prefers collecting one-time annual fees from departments rather than dealing with the complexity of ongoing usage-based billing.
This sparked a fascinating discussion about different approaches. While PSU might go chargeback-free, others shared their strategies:
Jon (UW) uses ServiceNow for their chargeback process. Ethan (CMU) advocates for "showback not chargeback," creating visibility without the billing complexity. Gabe noted their SAP-based financial system and the growing autonomy for FinOps decisions.
The conversation also touched on gatekeeping strategies. PSU prohibits student projects entirely on GCP, while CMU has created a dedicated folder for student projects with built-in restrictions.
Security Command Center Aspirations
Sheila from the University of Maryland wants to get more out of Security Command Center for routine security checks. Her team only provides troubleshooting support rather than managing researcher environments directly, so a tool like that could be a big help. It's that classic higher ed challenge: how do you maintain security oversight while preserving the autonomy researchers need?
The Tagging Time Bomb
Ethan floated an intriguing idea: implementing tagging policies with timestamps that automatically decommission resources if users don't "touch" the tag within a specified timeframe. It's the kind of automated governance that could prevent the project sprawl problem before it starts.
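Ethan's idea could be prototyped with nothing more than a timestamp tag and a TTL check. Everything in the sketch below — the `last-touched` tag name, the ISO-8601 format, the 90-day default, and the treat-untagged-as-expired policy — is an assumption for illustration, not anything CMU has built:

```python
from datetime import datetime, timedelta, timezone

def is_expired(resource_tags: dict, ttl_days: int = 90, now=None) -> bool:
    """Flag a resource for decommissioning when its timestamp tag goes stale.

    A resource whose 'last-touched' tag (ISO-8601, assumed convention) is
    older than the TTL is flagged; a resource with no timestamp tag at all
    is treated as expired, so untagged sprawl gets surfaced rather than
    quietly ignored.
    """
    now = now or datetime.now(timezone.utc)
    stamp = resource_tags.get("last-touched")
    if stamp is None:
        return True
    touched = datetime.fromisoformat(stamp)
    return now - touched > timedelta(days=ttl_days)
```

A scheduled job running this check over a project inventory gives you the "time bomb" — though as the UW cleanup saga shows, you'd want a warning-and-grace-period stage before anything actually gets decommissioned.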
Looking Ahead
With events like Google Cloud NEXT coming up (still $299 for edu pricing until mid-February) and ongoing barn-raising plans for the Campus Engagement Coach project, the community continues to balance innovation with operational realities.
The takeaway? Every institution is wrestling with similar challenges around project governance, financial models, and security oversight. The solutions may vary, but the collaborative problem-solving approach remains constant.
How does your institution handle project cleanup and cost allocation? The community would love to hear your war stories.
