The NET+ GCP Tech Share meeting in November felt a bit like walking into a room where everyone's talking about the same thing: AI. But we got there by way of training programs, finance management, and a sneak peek at a new resource for the CICP community.
The Community is Growing
We kicked off by catching up on what's been moving in the NET+ GCP world. The CLASS program had just wrapped up Google Cloud Administrator Basic Training, and there are plans to bring it back in 2026 if funding comes through. The week before, folks had gathered for a Tech Jam focused on the decidedly practical topic of managing GCP finances. And looking ahead, the Internet2 Technology Exchange in December is bringing Google-related workshops to the table.
But then I brought up our blog posts—the ones we publish after these events. Honest question: do people actually read them? The room's answers were telling. Some people didn't even know they existed. Others had heard of them but admitted they rarely check them out. We all agreed that if these posts showed up in people's RSS readers automatically, they'd actually get read. Consider that a to-do for us for 2026.
A Tool Built for the Community
Tim took the floor to preview something he's been working on with students at the AWS Cloud Innovation Center at Cal Poly: the Internet2 Cloud Community Assistant. It's an AI tool designed to do one thing really well—help people find information about cloud infrastructure in the context of research and education. Want to know how to use GCP in higher ed? The tool searches through hundreds of webinars and resources and surfaces what you need. It's tailored specifically for the higher education context, not just generic cloud information. The idea is elegant: why build all the answers from scratch when you can make the existing community wisdom searchable?
The AI Conversation Takes Over
And then we were talking about AI for the rest of the meeting. Which, honestly, makes sense. Google is pouring resources into AI, and institutions are trying to figure out what to do about it.
Chris Daugherty from Google let us know that Gemini Enterprise is coming with educational pricing—$20 per month for faculty and super users, $5 for students and staff. But there's a catch: institutions need to work with their Field Sales Representative to get the right EDU-specific SKU to unlock that pricing. It's not automatic, which raised some eyebrows. The takeaway? This deserves a deeper conversation, so Chris is coming back in December to walk through Gemini Enterprise features and pricing in detail.
The Honest Tension
But the real conversation was about something less tangible than pricing. Universities are trying to be the trusted providers of secure, vetted tools on campus. Yet Google is showing up with tents at campus events, handing out free trials of Gemini to students directly. There's tension there—good intention on Google's part to build mindshare, but it undercuts campus efforts to manage who's using what and how. Add in compliance concerns like HIPAA, and it becomes a governance problem that no single institution can solve alone.
The consensus: we need better coordination between Google and the campuses. Google could hit the same adoption goals without sidestepping the institution's role as trusted provider. It's a workable problem if both sides show up ready to solve it together.
The Research Door is Opening
Finally, some encouraging news for research institutions. Google is partnering with universities on initiatives like DeepMind's Co-Scientist AI model—an AI tool built to assist research work. Right now it's a pilot program; Google is handing out credits to select institutions to get feedback on how to make these tools actually useful for researchers. Washington University is already in, and Google's signaling that they're open to expanding these partnerships and developing APIs to support research workflows. If your institution is interested, now's the time to raise your hand.
What Comes Next
A lot of this is still taking shape. RSS feeds for our blog posts, a deeper dive on Gemini Enterprise pricing, more schools joining research pilots—these are the things we're tracking in the coming weeks. But the big picture seems clear: AI is reshaping how universities think about cloud infrastructure, and the institutions that engage thoughtfully with vendors and with each other will be the ones that get it right.
Your university's cloud bill is likely hiding significant optimization opportunities—and the strategies to unlock them were the focus of November’s NET+ AWS Town Hall. Experts from Four Points partner Strategic Blue walked attendees through a practical playbook for identifying hidden cloud spending patterns, optimizing resource usage, and implementing a flexible commitment strategy that actually works for higher education institutions.
The session wasn't about budgets or spreadsheets—it was about transforming how universities think about cloud spending as a strategic advantage rather than a cost center.
Five Practical Ways to Stop Wasting Cloud Spend
The heart of the presentation centered on five actionable optimization strategies that university cloud teams can implement immediately. These aren't theoretical concepts; they're based on real-world interventions that have helped institutions reclaim significant portions of their cloud budgets.
The first major opportunity is identifying what experts call "zombie infrastructure"—instances running 24/7 that rack up costs while delivering no value. One customer discovered massive EC2 clusters sitting idle with CPU usage at just 1%, zero network activity, and no disk operations. By implementing automated detection using AWS CloudWatch metrics and setting up intelligent alarms, universities can catch these resources before they drain budgets. The real power comes from automation: using AWS Instance Scheduler with Lambda functions to automatically shut down idle resources frees researchers and students from manual monitoring while cutting costs.
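To make that concrete, here's a minimal sketch of the kind of idle-instance scan a scheduled Lambda could run. This is our illustration, not the Instance Scheduler itself; the CPU threshold and lookback window are assumptions to tune, and you'd want to check network and disk metrics too before stopping anything real.

```python
# Minimal zombie-instance scan: flag running instances whose daily-average
# CPU never rose above a threshold over the lookback window.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def find_idle_instances(cpu_threshold=2.0, lookback_days=7):
    """Return running instance IDs whose peak daily-average CPU stayed low."""
    idle = []
    now = datetime.now(timezone.utc)
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                stats = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId",
                                 "Value": instance["InstanceId"]}],
                    StartTime=now - timedelta(days=lookback_days),
                    EndTime=now,
                    Period=86400,          # one datapoint per day
                    Statistics=["Average"],
                )
                datapoints = stats["Datapoints"]
                if datapoints and max(p["Average"] for p in datapoints) < cpu_threshold:
                    idle.append(instance["InstanceId"])
    return idle

if __name__ == "__main__":
    zombies = find_idle_instances()
    print("Candidate zombies:", zombies)
    # A scheduler could act automatically once you trust the detection:
    # ec2.stop_instances(InstanceIds=zombies)
```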
Beyond eliminating waste, there's significant value in right-sizing workloads. Over-provisioning is the number one reason cloud bills spike—teams select resource types "just to be safe" and never revisit those decisions. Newer hardware generations deliver better performance at lower costs. For example, upgrading from older instance types to the latest generation can reduce costs while simultaneously improving performance, creating a rare win-win scenario.
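If you'd rather not hand-roll that analysis, AWS Compute Optimizer will surface right-sizing candidates for you, including those newer-generation swaps. A minimal sketch, assuming the account has already opted in to Compute Optimizer:

```python
# List over-provisioned EC2 instances and the top suggested replacement.
import boto3

co = boto3.client("compute-optimizer")
resp = co.get_ec2_instance_recommendations()

for rec in resp["instanceRecommendations"]:
    if rec["finding"] == "OVER_PROVISIONED":
        current = rec["currentInstanceType"]
        # Options come ranked; the first is typically the top suggestion.
        suggested = rec["recommendationOptions"][0]["instanceType"]
        print(f"{rec['instanceArn']}: {current} -> {suggested}")
```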
Storage represents another major opportunity, especially in academic environments where compliance often requires long-term data retention. Standard S3 storage costs are roughly 10 times higher than cold storage for data kept for compliance purposes. Combining intelligent tiering with S3 lifecycle policies automatically transitions aging data to cheaper storage tiers, and moving to Glacier Archive can cut storage costs dramatically. One customer moved 50% of their data to Glacier and saw substantial S3 cost reductions.
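Here's a sketch of what such a lifecycle rule can look like in practice. The bucket name, prefix, and day counts are placeholders; align the transition schedule with your actual retention requirements.

```python
# Tier aging compliance data toward cheaper storage classes automatically.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-compliance-archive",            # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-compliance-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "records/"},  # hypothetical prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```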
Making Commitment Strategy Work for Universities
The conversation shifted to what universities typically find most challenging: buying commitments without over-committing or leaving savings on the table. Cloud providers generally require 1- or 3-year upfront commitments with fixed terms. For universities managing research grants, evolving projects, and multiple departments, forecasting cloud usage three years in advance isn't realistic.
Strategic Blue's approach flips this model using a "laddered" commitment strategy with convertible Reserved Instances. This approach delivers 3-year discounts with just one month of commitment risk—essentially giving universities the steep discounts of long-term commitments without the lock-in risk. As usage patterns shift, the strategy adapts, meaning universities don't have to choose between inflexible savings and operational flexibility. This matters enormously for institutions balancing researcher needs with budget constraints.
The Human Element That Changes Everything
One clear differentiator throughout the session was the emphasis on dedicated FinOps support alongside automation and tools. Every customer gets access to a FinOps manager who provides ongoing guidance—not just on cost reporting, but on usage optimization decisions, budget management, and responding to anomalies in spending patterns. When a new AWS instance type launches with better performance-per-dollar, your FinOps partner alerts you. When your cloud bill spikes unexpectedly, there's a person ready to dig into what happened.
For universities managing 200+ linked accounts across researchers, departments, and initiatives, this human guidance combined with flexible commitment management removes the tedious financial administration that often falls to already-stretched cloud teams.
Real Results from Real Institutions
The presentation included concrete examples from UCSD, which has partnered with Strategic Blue for over eight years. They've moved from manual chargeback processes to accurate allocation across 200+ linked accounts, eliminated the burden of managing purchase orders and cloud credits across three providers, and achieved 17% monthly savings. Even more compelling, they reinvested some of those savings into a university sustainability fund—demonstrating how smarter cloud spending can align with institutional values.
What This Means for NET+ AWS Customers
Higher education faces a unique cloud challenge: you need the flexibility to support diverse research projects, changing enrollment, and evolving compliance requirements, all while controlling costs in an environment where forecasting is inherently uncertain. The FinOps playbook shared in this session directly addresses that tension.
As a NET+ AWS customer, you have access to Strategic Blue's expertise through your relationship with Four Points Technology. Whether you're struggling with idle resources, over-provisioned workloads, storage costs spiraling out of control, or uncertainty about commitment strategies, the framework is proven to work. The assessment itself is free—it typically runs in about four hours—and gives you concrete evidence about where your institution stands and what's possible.
Universities that implement even two or three of these strategies see meaningful budget reductions. Those that combine technical optimization with flexible commitment management and ongoing FinOps guidance often exceed 15% savings. The partnership between Four Points and Strategic Blue means you don't have to navigate these decisions alone—you have dedicated support to help maximize the value of your NET+ AWS investment.
Be sure to check out the other blog posts we've written. As always, feel free to send any feedback to netplus@internet2.edu.
Estimated reading time: 4 minutes
As November rolled in, Bob started pushing facilitation of the Tech Shares my way. I might have felt a bit of trepidation, but there was no need. This community brought lots of good things to share, so we had plenty to talk about.
So many happenings
First of all, it was a busy fall with many events and community calls going on, so we took a good bit of time just keeping up on what was happening. We talked about the third AWS Barn Raising and its associated meetings, Internet2 Technology Exchange and the GameDay there, and of course AWS re:Invent and Kevin Murakoshi’s re:Cap of that afterwards.
Speaking of re:Invent, Kevin pointed out that many important new releases actually come out before the conference. AWS simply has too many new things to cover in one week, and things that don’t make a big splash on the keynote stage—but often are nonetheless very important to working cloud practitioners—will often quietly get announced in the weeks leading up to the big show.
Fortunately, we have Kevin watching the announcements and calling out things big and small that we might want to make note of. This year, those included new CloudWatch metrics to detect I/O issues on EBS volumes, new deployment models for ECS, enrollment of OUs in Control Tower, container-level tagging for cost allocation, and support for EKS in AWS Backup, among other things.
Catch me if you can (or better yet, keep me out)
We discussed a security incident at one institution where an exposed access key in a low-spending account (typically $1.50 per day) led to a very large SageMaker and EC2 bill. Other folks chimed in with similar stories from their own experience. We commiserated over the difficulty of identifying all of the resources running in an account when responding to this sort of incident (so you can shut down things that shouldn't be there), but we also shared approaches to preventing issues and mitigating risks. We talked about Trusted Advisor checks and using Lambda functions to detect and kill off exposed access keys. We also talked about using Service Control Policies (SCPs) that can prevent expensive resource launches (for example, disallowing the launch of very large EC2 instances).
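As an illustration of the "detect and kill" idea, here's a minimal Lambda sketch that deactivates a reported key. The event fields are assumptions; in practice you'd wire this to your detection source, such as Trusted Advisor's exposed-key findings delivered via EventBridge.

```python
# Deactivate an exposed access key named in the triggering event.
import boto3

iam = boto3.client("iam")

def handler(event, context):
    user_name = event["username"]            # assumed event shape
    access_key_id = event["access_key_id"]   # assumed event shape

    # Deactivate rather than delete, so the key remains available as
    # evidence for the incident review.
    iam.update_access_key(
        UserName=user_name,
        AccessKeyId=access_key_id,
        Status="Inactive",
    )
    print(f"Deactivated {access_key_id} for {user_name}")
```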
Innovation Sandbox in action
The very first community call I attended after starting with Internet2 was our October Strategy Call about Innovation Sandbox on AWS. As such, it was really interesting to hear from the folks at Northwestern in November about the realities of their experience with it on their campus.
They currently have it running in production, with students actively using it. They mentioned that the most challenging part was the configuration of AWS Nuke, and that the CloudFormation setup took about half a day. Once that was done, however, it only took about an hour to deploy the accounts.
The setup was successful, as was an upgrade that Northwestern completed in between our two Tech Shares. The new version addressed some of the issues they had been experiencing, but there are outstanding challenges around budget management and cost visibility, as well as working with the “freezing” feature when accounts hit their budget limits. Kevin is taking that feedback back to AWS, so hopefully we will see improvements in future releases. Stay tuned!
Brave new worlds: vibe coding and Graviton
We heard from Rob at Loyola Marymount about his experience dipping his toes into the world of Vibe Coding. He has been using Kiro (which recently reached GA) to build solutions like a containerized Shibboleth deployment. It's clearly a rapidly developing area; Rob mentioned new features like centralized project controls that help maintain standards, and it's already helping LMU build solutions faster than manual CloudFormation development.
While you’re thinking about Kiro to help you deploy your applications, or perhaps modernize and transform them, you may want to consider deploying Graviton-based instances to save on cost. We talked about what works well there, and the consensus is that there is a broad range of workloads that will work just fine. Web servers and interpreted languages should just move over without an issue, for example, and building multi-architecture container images gives the flexibility to deploy in both Intel and ARM environments.
It’s All About the Money
Managing cloud finances is a perennial topic, and this month was no exception. Andrew from Drexel asked about automated approaches to internal rebilling, as they are coming online with a new NET+ AWS Contract and need to be able to pass costs to the account owners. They are also looking for a common solution that would work for both their AWS and Azure usage, so options like having Four Points directly bill each account would only address part of the problem for them. We discussed third-party tools such as Kion and Flexera CCO that can help. Ultimately the process will need to be tuned to the specific needs of each institution.
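Whatever tool ends up doing the rebilling, the raw material usually starts with per-account cost data. Here's a minimal sketch of pulling one month's cost per linked account from Cost Explorer; it's a starting point only, since real chargeback still has to handle credits, commitment amortization, and shared costs.

```python
# Pull last month's unblended cost grouped by linked account.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-11-01", "End": "2025-12-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    account_id = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{account_id}: ${amount:,.2f}")
```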
Whether your institution is centrally funded or passes through cloud charges, there is opportunity for significant cloud savings if you can effectively make use of spend commitments (Reserved Instances and Savings Plans). This came up in the context of the presentation given by Strategic Blue (a partner of Four Points) on their approach to FinOps and reservation management. They offer a service to manage commitments in exchange for a portion of the savings achieved. Institutions using services like this have achieved RI coverage of over 90%, so giving up a portion of the savings to pay for it may be a win. If you already have a strong program for managing your RIs and SPs, it may not be worth it.
Estimated reading time: 4-5 minutes
If you've ever stared at a GCP billing dashboard and wondered why your $5,000 in research credits only covers five months instead of ten, you missed something important in November. The November NET+ GCP Tech Jam brought together higher education cloud practitioners from Stanford, University of Michigan, and Carnegie Mellon with Google's Public Sector team to discuss the financial complexities that keep cloud admins awake at night. What made it different: Google's engineers listened without the typical corporate polish, and more importantly, they committed to escalating these issues within Google to the people who can actually address them.
This wasn't a vendor pitch where everything gets solved by quarter-end. It was a room full of people saying "we have real problems" and Google saying "we hear you, and we're going to escalate this."
The Credit Math Nobody Warns You About
Here's a scenario confusing researchers across the country: You get a $10,000 research credit. Your project's monthly bill runs $1,000. You think you have ten months of runway. You don't. You have five.
Why? Because credits draw down at full list (MSRP) price, while your actual costs reflect negotiated discounts. The moment your team starts monitoring spend, they're working with phantom numbers. For researchers and admins who aren't full-time FinOps engineers, this creates cascading confusion about project budgets, resource planning, and whether you can actually afford that next phase of research.
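A quick worked example makes the math plain (the 50% discount here is hypothetical; plug in your own rate):

```python
# Credit-burn mismatch: credits deplete at list price, not your price.
credit_grant = 10_000                     # credits draw down at list (MSRP)
negotiated_discount = 0.50                # hypothetical discount off list

monthly_bill_after_discount = 1_000       # what the project actually costs you
monthly_list_price = monthly_bill_after_discount / (1 - negotiated_discount)

expected_runway = credit_grant / monthly_bill_after_discount  # what you assume
actual_runway = credit_grant / monthly_list_price             # what you get
print(expected_runway, actual_runway)     # 10.0 5.0
```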
The room acknowledged this is an industry-wide frustration. Google's engineering team took detailed notes and committed to exploring potential approaches—whether that's adjusting how credits interact with discounts, restructuring credit values to reflect actual purchasing power, or other mechanisms. As one participant articulated it, "Give us half the credits, but make it make sense." Whether that becomes reality depends on what Google finds when they dig into it internally.
When Your Research Credits Can't Buy What You Need
Then there's the Maps API paradox. A researcher enrolls in a GCP research credit program, plans to use Google Cloud services, and discovers that Maps—which shows up in their billing console—won't accept the credits they were granted. This isn't user error. It's organizational silos inside Google. Maps operates under a separate business structure, so the money doesn't flow the same way.
This creates an ugly surprise for institutions: What you thought was comprehensive cloud enablement has a gaping hole. Google's team acknowledged this is rooted in "a function of business silos" within the company and committed to exploring whether a backend mechanism could allow credits to flow where the system currently blocks them. That exploration—and whether it leads anywhere—will take time and involve people outside this conversation.
The Support Billing Crisis Nobody Saw Coming
Perhaps the most damaging issue discussed was a recent change to how Google bills support. Until November 2024, support costs were tied to individual projects and their billing accounts. This meant costs tracked to the resource that generated them. Higher ed institutions could charge support back to the right department, the right grant, the right funding source.
Google changed this. Now, all support costs flow to a single organizational billing account—disconnected from the projects that actually generated the support need. For institutions with multiple funding sources (federal grants, departmental budgets, foundation funding), this broke cost accounting entirely.
One university suddenly found itself billing its own organization for support on projects funded by the National Institutes of Health. The NIH was paying for the project, but the university was eating the support bill—sometimes tens of thousands of dollars for work they didn't control. As one participant reframed it, this is fundamentally "a government accounting problem," and when federal funds aren't properly tracked, auditors and compliance teams notice.
The Google engineers heard this and recognized the scope of the issue—this isn't a complaint; it's a compliance and accounting problem affecting research institutions broadly. They committed to escalating the question of whether support costs could be restructured to enable proper cost allocation. That doesn't guarantee a solution, but it means the issue will get in front of decision-makers who weren't previously aware of its impact.
Ideas on the Table, Not Promises
What emerged from the conversation was a foundation for partnership. Google's Sean Maxwell proposed a model inspired by Epic (the healthcare software giant): what if support tiers were based on organizational maturity rather than a percentage of overall spend? A team of seasoned cloud engineers would have different support needs than a team just starting out. That's value-based thinking rather than one-size-fits-all.
He also acknowledged something crucial: R1 research institutions are Google's fastest-growing segment in the public sector, yet the formal mechanisms for strategic feedback had atrophied. By the end of the call, Google committed to exploring how to formalize a customer advisory board relationship where coordinated institution feedback actually gets escalated to decision-makers. We reminded them that the NET+ GCP Service Advisory Group already exists and is available to serve this function.
Community and Momentum
One consistent bright spot was recognition of Internet2's role in the ecosystem. Participants discussed recent workshops on GCP organizational administration and an emerging series on secure research environments. The underlying insight: you can't just hand cloud tools to universities and expect success. You need pedagogy, community, and people who understand academic research workflows.
Google also indicated interest in exploring curriculum partnerships—helping develop foundational AI and data skills training that institutions could offer to incoming researchers. These are conversations worth having, even if they take time to develop.
What This Actually Means for You
If you're managing GCP at a research institution, what matters is this: the problems you've been wrestling with—credit confusion, organizational complexity, support cost allocation—aren't unique to you or unimportant. They're systemic issues affecting a large segment of Google's customer base. And they're now documented and scheduled to be presented to people inside Google with actual decision-making authority.
That doesn't mean they'll get fixed next quarter. Google is a complex organization, and technical, financial, and policy changes take time. But the gap between "problem nobody knows about" and "problem someone at Google is actively escalating" is significant.
More importantly, you're now part of a community of institutions facing these challenges together. The conversation made one thing clear: higher education's needs in cloud infrastructure are distinctive enough to deserve dedicated attention. Whether Google can meet those needs is a different question—but at least now they're asking.
Be sure to check out the other blog posts we've written. As always, feel free to send any feedback to netplus@internet2.edu.
If you weren't part of our third NET+ AWS Barn Raising, you missed hands-on access to a solution that's quietly solving one of higher education's most pressing compliance challenges. The solution itself was developed by the AWS Cloud Innovation Center (CIC) at Arizona State University, built in partnership with the Ohio State University Library to address their urgent need for scalable, cost-effective PDF remediation. Participants walked away with a fully functional, battle-tested system deployed in their own AWS environments that is ready to demo to their institutions and tackle the April 2026 accessibility deadline head-on.
The Urgency Behind the Build
PDF accessibility isn't a new problem, but the timeline just became real. Educational institutions and state and local governments must comply with accessibility standards by April 2026, and manual remediation in that timeframe is simply not feasible. For most libraries, which house some of the largest digital content repositories on campus, the challenge is staggering: thousands, sometimes tens of thousands, of PDFs sitting in non-remediated archives with no clear path to compliance.
The traditional approach involves paying $4 to $5 per page to external vendors, which is cost prohibitive at scale. The barn raising introduced a fundamentally different approach: automation powered by AWS, Adobe APIs, and Amazon Bedrock, bringing the cost down to pennies per page. For institutions managing massive document collections, this difference isn't marginal. It takes a nearly insurmountable problem and makes the solution achievable.
Two Pathways, One Solution
One thing that made this barn raising special was its flexibility. Participants could deploy two distinct remediation pathways depending on their needs.
The PDF-to-PDF option leverages Adobe's auto-tagging capabilities to transform inaccessible documents into remediated PDFs in place. The system handles the heavy lifting automatically: auto-tagging, intelligent alt text generation for images, handling multi-page documents through intelligent chunking and merging, and comprehensive accessibility reporting. The entire workflow runs serverless and scales to handle thousands of pages in parallel.
The PDF-to-HTML option takes a different approach, using Amazon Bedrock Data Automation to convert PDFs into fully accessible HTML. This pathway works for institutions interested in repurposing content as web-native material or creating alternative formats, and for institutions for whom acquiring licensing for the Adobe APIs is a challenge.
Both options feed into a unified web interface, making it trivial for librarians or content managers to upload documents and monitor processing in real time.
From Concept to Live Demo in Hours
The barn raising itself was a masterclass in hands-on deployment. Arun Arunachalam and his ASU Cloud Innovation Center team, alongside Shashvat (an ASU student developer who helped build the solution), guided participants through a straightforward three-step deployment: backend remediation infrastructure, frontend interface, and integration setup.
Participants opened their AWS consoles, cloned a GitHub repository, and ran a deployment script. Within 10-12 minutes for the first component and faster for subsequent pieces, they had live infrastructure processing PDFs. As participants encountered friction points—regional availability constraints, service quotas, permission timeouts—the team guided people through solutions in real time. By the end, five institutions had working deployments they could take directly to their stakeholders.
When Reality Meets the Lab: Troubleshooting in Real Time
Not everything went according to plan, which is exactly what made this barn raising valuable. Real deployments across different institutional AWS accounts surfaced real-world issues that no pre-event checklist could fully anticipate.
One participant hit an Elastic IP quota limit in their non-production account, discovering they'd already used up their regional allocation. Rather than blocking progress, the team offered three solutions: request a quota increase (often auto-approved for small bumps), try a different region, or carefully clean up resources and start fresh. Another institution deployed in Ohio, only to find that Amazon Bedrock Data Automation wasn't available in that region. A quick pivot to US East 1 solved it—a lesson the team emphasized: not all AWS regions are created equal, and checking service availability upfront saves time later.
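One way to do that upfront check is to query the public SSM parameter tree AWS publishes for regional service availability. The pattern itself is real; the exact service-name key for Bedrock Data Automation is our assumption, so list the services branch of the tree to confirm it first.

```python
# List the regions where a given AWS service is available, using the
# public /aws/service/global-infrastructure SSM parameter tree.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# "bedrock-data-automation" is our guess at the service key -- browse
# /aws/service/global-infrastructure/services to confirm the exact name.
path = "/aws/service/global-infrastructure/services/bedrock-data-automation/regions"

regions = []
for page in ssm.get_paginator("get_parameters_by_path").paginate(Path=path):
    regions += [p["Value"] for p in page["Parameters"]]

print(sorted(regions))  # deploy only where the service actually exists
```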
Identity Center sessions timing out mid-deployment caught one participant off guard. After an hour of other work before the barn raising, their SSO credentials expired while the deployment was running, causing permission failures downstream. The fix was straightforward once identified: refresh the session and retry. Another person accidentally triggered the UI deployment before finishing the PDF-to-HTML backend, interrupting the sequence. The team walked them through understanding which resources had actually been created and where to pick up.
These weren't failures of the solution—they were real operational constraints that institutions will face in production. The barn raising made them visible and solvable. More importantly, participants left understanding that they can diagnose and fix issues themselves. They learned to check AWS CodeBuild logs for visibility into what's actually happening during deployment, navigate CloudFormation stacks to understand resource dependencies, and use Step Functions to monitor individual remediation workflows as documents flow through the system.
Why This Matters Beyond Today
The real value wasn't just the deployment. It was what came after: institutions now have proof that they can meet the April 2026 requirement without outsourcing their content or breaking their budgets. They can test with their own PDFs, understand the remediation quality, and plan for scaling to their full archives.
The solution also isn't locked into a single workflow. Participants learned how to drop documents directly into S3 buckets for batch processing, use the web UI for on-demand remediation, or integrate with existing tools like Storage Gateway or CyberDuck for seamless file access. Some institutions can route remediated HTML through CloudFront distributions to host static, accessible web content. Others can feed results back into their digital repositories.
The Next Step
If you attended, you're already ahead. You've got a working environment, you've demonstrated the solution to your team, and you understand the cost model. The team made it clear: they're here to help scale. AWS Solutions Architects can work with your institution on production deployments, negotiate Adobe API pricing at volume, and optimize Amazon Bedrock concurrency limits for larger batches.
If you didn't attend but your institution is managing PDFs and facing the April 2026 deadline, this solution is available now. The GitHub repositories are public, the architecture is well-documented, and the cost model is transparent. You don't need to wait for the next barn raising to get started—though learning directly from the team certainly accelerates the path forward.
The accessibility remediation challenge isn't going away. But for the first time, institutions have a clear, affordable, and scalable path forward that doesn't require choosing between compliance and budgets.
Estimated reading time: 2 minutes
I got to dive right in with the AWS Tech Shares: the first of our two October meetings fell on my third day at Internet2. Bob and I were on the call from the conference room of Internet2's Ann Arbor office, where we were doing a couple of days of on-site onboarding for the new guy.
Upcoming Events
It’s a busy season for the NET+ AWS community. In between the two tech shares, we had our Strategy call with an in-depth exploration of the AWS Innovation Sandbox. At the end of the month is Educause in Nashville, which will be the venue for the in-person October meeting of the CCCG, replacing the usual Zoom call. Once that is over, we’ll turn our attention to the November Barn-Raising: PDF Remediation Solution, and then re:Invent (with Kevin’s famous re:Cap) and I2 Technology Exchange in December. We also learned this month that Cloud Forum 2026 has changed venues and will now be hosted by the University of Wisconsin, Madison next May.
Training & Mentoring
Ever on the lookout for content for the CLASS program, Bob asked for feedback on AWS training gaps for the R&E community. One idea that got general support from the group was offering refresher training and/or test prep for folks who are looking to renew existing certifications.
We also discussed the prospect of a more formalized mentoring program. This community has always been generous with its time, sharing knowledge and meeting with schools that are new to the cloud to help them get started, but it has always been done on a fairly ad-hoc basis. We talked about formalizing that more, and there was general support for building a more well-articulated program around matching new schools with ones that have been around the block a few times.
Security & Identity
We touched on a number of topics in the security and identity space this month. Michigan State is working on implementing Identity Center, and Northwestern is evaluating replacements for its current Cloud Security Posture Management (CSPM) tool. This fed discussions of both the merits of native tools vs. 3rd-party tools (especially in a multi-cloud environment) and the challenges of paying for security tools centrally when the cost of cloud usage is being distributed. We also discussed various approaches to automating credential rotation (again touching on both native and 3rd-party tools).
Wrangling AWS
In the earlier meeting this month, we talked about techniques for supporting Mechanical Turk, with Kelly from UCSD sharing that they have a separate OU set up just for that purpose that has SCPs in place that limit the use of those accounts to just MT. They employ a similar approach for scoping and managing the use of AWS Marketplace.
Of course, one of the topics for our second meeting was the outage in US-East-1 two days before. Most of the folks on the call had their primary resources in other regions (being located in the Midwest or on the Pacific coast), so they saw more impact from SaaS services that their institutions use that are hosted in US-East-1.
Celebration
We’d like to offer our congratulations and appreciation to Rob from LMU. He was able to come to our call on the 22nd and share their success in migrating their Banner implementation from on-premises to AWS. This is a project that’s been several years in the making, and the final migration and cutover involved a long string of long days for the folks at LMU. So, congratulations for the successful move, and appreciation for being awake enough to join us and tell us about it!
Estimated reading time: 4 minutes
The AWS Innovation Sandbox has been discussed repeatedly in Tech Shares this year, generating the inspiration for this particular Strategy call: it was time to do a deep dive with the experts from AWS on this hot topic. In this call Todd Gruet, AWS Senior Solutions Architect and Cloud Foundations Specialist, walked through how this solution works and how it can be used to create a controlled but flexible environment for cloud exploration.
What Problem Does Innovation Sandbox Solve?
The R&E community has long needed a way to provide students and researchers with AWS accounts for learning and experimentation without losing control over costs and access. Before Innovation Sandbox, managing these sandbox accounts meant either giving up financial oversight or burdening central IT teams with constant administrative tasks.
Innovation Sandbox changes this by letting administrators pre-create a pool of AWS accounts and then delegate management to department heads, professors, or lab managers—without requiring those managers to have high-level AWS organizational access.
How It Works: The Basics
The solution revolves around three personas that reflect the delegated architecture:
Administrators deploy the solution and manage the underlying AWS account infrastructure. This typically falls to platform engineering teams or central IT.
Managers (think: professors, department heads, lab directors) control access to sandbox accounts through a user-friendly interface. They can create "lease templates" that define spending limits, time limits, and automated actions when thresholds are reached.
Users (students, researchers, developers) request sandbox accounts based on available lease templates and get immediate or approval-based access.
The magic happens through lease templates. Want to give students a one-week sandbox with a $100 budget for a class project? Create a template with those parameters. Need to provide researchers with $1,000 accounts with no time limit? That's another template. Each template can trigger different actions—sending alerts, freezing accounts, or automatically wiping and recycling accounts when limits are reached.
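To make the idea concrete, here's what those two templates might look like as data. This is purely illustrative; the actual Innovation Sandbox schema and field names may differ.

```python
# Hypothetical lease-template parameters, mirroring the examples above.
# (Illustration only -- not the real Innovation Sandbox schema.)
class_project_template = {
    "name": "one-week-class-project",
    "max_budget_usd": 100,
    "lease_duration_days": 7,
    "approval_required": False,
    # Actions when thresholds are hit: alert, freeze, or wipe-and-recycle.
    "budget_thresholds": [
        {"percent": 75, "action": "ALERT"},
        {"percent": 100, "action": "FREEZE"},
    ],
}

research_template = {
    "name": "research-sandbox",
    "max_budget_usd": 1_000,
    "lease_duration_days": None,   # no time limit
    "approval_required": True,
    "budget_thresholds": [{"percent": 100, "action": "FREEZE"}],
}
```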
What is it good for?
In the call, Todd highlighted several compelling use cases:
Classroom projects: Set a one-week duration for a course assignment, then freeze accounts (rather than wipe them) so instructors can review and grade what students built.
Hackathons: Spin up 100 accounts quickly for a weekend event, then shrink the pool back down when it's over.
Research environments: Allow researchers to innovate freely but freeze accounts at spending thresholds to force conversations before accidentally deleting valuable work (Todd used the "zebra unicorn" metaphor to describe someone's breakthrough discovery that you don’t want to lose just because they hit a budget limit).
What does it give you?
Cost visibility for managers: One of the most praised features is that managers can see near real-time spending and duration for all active accounts without needing access to the organization's billing console—something department heads and professors typically don't have.
Automatic cleanup: When an account's time or budget runs out, AWS Nuke automatically cleans up all resources and returns the account to the available pool. This recycling capability means you can reuse accounts rather than constantly creating new ones.
Flexible controls: You can set maximum budgets at the organizational level and let managers create templates within those bounds. You might limit users to one sandbox at a time to prevent sprawl, or allow multiple accounts based on your needs.
Coming soon: The next release (expected within two months) will add the ability to invite multiple users to a single sandbox account—a feature specifically requested by universities—and improved cost reporting by department or cost center.
What’s the catch?
Innovation Sandbox does have some limitations and caveats:
Prerequisites matter: The solution requires IAM Identity Center as a hard prerequisite. You'll also need someone with access to the management account and AWS Organizations to deploy it.
Billing delays: Cost data comes from AWS Cost Explorer, which updates once or twice per 24 hours. This means spending could continue for up to 23 hours after crossing a threshold before the system responds. You can mitigate the risks of the delayed response by setting freeze thresholds below your actual limits and using Service Control Policies (SCPs) to prevent expensive resource types from being created in the first place.
Resource coverage: The cleanup process uses AWS Nuke, which supports most but not all AWS resource types. The solution prevents creation of unsupported resources through SCPs, and if cleanup fails for any reason, the account moves to a quarantine status for manual review.
Cost allocation complexity: In the current version, aggregating costs by department requires access to DynamoDB tables in the Hub account—typically a central IT function. The upcoming release will improve this with built-in cost reporting, though it won't use AWS cost allocation tags (so it operates separately from traditional chargeback systems).
Cost Management Best Practices
Todd shared several recommendations from working with dozens of institutions:
- Even if you don't want to automatically wipe accounts, set a high maximum budget threshold as a safety net against unexpected cost overruns
- Use SCPs to deny creation of large or expensive instance types while still allowing smaller instances that meet learning objectives (see the sketch after this list)
- Consider using freeze actions rather than immediate account wipes to force users to take action before losing their work
- Set multiple alert thresholds, but don't rely on users responding to alerts alone—that's where freeze actions become valuable
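Here's the SCP sketch referenced above: a deny rule for pricey instance types, created and attached with boto3. The instance-type patterns and OU ID are placeholders to tailor to your environment.

```python
# Create and attach an SCP that blocks launches of large instance types.
import json

import boto3

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyLargeInstances",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            # Example patterns -- adjust to what your curriculum needs.
            "StringLike": {"ec2:InstanceType": ["*.8xlarge", "*.16xlarge", "p5.*"]}
        },
    }],
}

orgs = boto3.client("organizations")
policy = orgs.create_policy(
    Name="sandbox-deny-large-instances",
    Description="Keep sandbox accounts on modest instance types",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",   # hypothetical sandbox OU
)
```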
Getting Started
If you want to dive into the details, a recording of the presentation, along with the transcript, is available in Google Drive. The solution is free to use (though you still pay for the AWS resources consumed) and deploys via CloudFormation. AWS provides a comprehensive implementation guide, and upgrades are typically just template updates. Several schools in the CICP community have deployed it, so there is knowledge to be tapped among your peers if you need help.
Estimated reading time: 2 minutes
A New Face at I2
This was the first GCP tech share since I joined Internet2 on October 6 to become the Program Manager for NET+ GCP and NET+ AWS, so Bob introduced me. I’ll be taking over the GCP program from Bob while he continues to shepherd CICP and CLASS.
A question about training gaps for R&E in the GCP space led to several discussions that were less about training gaps and more about knowledge gaps.
Institutional Onboarding
We talked about institutional onboarding and the need to document how users get access to GCP at a particular institution. Bob mentioned the Higher Ed GCP Adoption Guide, which hasn’t been updated in a couple of years, but provides a broad set of guidelines for institutions bootstrapping their GCP support program.
Sorting out AI platforms
We did talk about training for new AI platforms like Gemini, because researchers often arrive with information that hasn't been communicated to IT. Part of the challenge is even knowing which products fall under which contract or support structure (since both GCP and GWE have AI offerings). Internet2 is working with Google on a coordinated program to help schools with Gemini, with details to be announced soon.
Managing Credits
We discussed the challenges of managing credits, especially in an environment where project creation has been restricted. Bob shared Indiana University's approach of creating folders for specific classes and giving faculty members privileges to allow students to create projects. Doug (Burwood) showed a diagram of their recommended folder structure with a learning folder that has full rights for teaching and learning credits. We discussed the need to make sure that project creation permissions don't allow students to see each other's projects unless specifically configured that way.
Creating dedicated folders with appropriate permissions for each class was the recommended pattern: faculty can be given control over Google Groups containing their students, and projects can be cleaned up at the end of the semester simply by removing the folder.
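For those who script their GCP administration, here's a minimal sketch of that pattern using the google-cloud-resource-manager client. The folder IDs, class name, and group address are all placeholders.

```python
# Create a per-class folder and let a faculty-managed group create
# projects inside it.
from google.cloud import resourcemanager_v3

folders = resourcemanager_v3.FoldersClient()

# 1. Create a folder for the class under the teaching-and-learning folder.
operation = folders.create_folder(folder={
    "parent": "folders/111111111111",        # hypothetical parent folder
    "display_name": "cs101-spring-2026",
})
class_folder = operation.result()

# 2. Grant the Google Group the ability to create projects in that
#    folder (and nowhere else).
policy = folders.get_iam_policy(request={"resource": class_folder.name})
binding = policy.bindings.add()
binding.role = "roles/resourcemanager.projectCreator"
binding.members.append("group:cs101-students@example.edu")
folders.set_iam_policy(request={"resource": class_folder.name, "policy": policy})

# At semester's end, deleting the folder's projects and then the folder
# itself cleans up the whole class footprint in one place.
```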
Firebase
The complexity of Firebase contract terms creates challenges for institutions trying to ensure compliance. This came up in the context of concerns about Firebase and other Google services that may not be fully covered by GCP contracts. Chris D (Google) shared a link to Firebase terms of service, acknowledging it is probably the most complicated service, with different terms for different components.
Research Computing and Cluster Toolkit
In response to talk of ODU's focus on moving research workloads to Google, Ethan (CMU) described using Cluster Toolkit for rapid deployment of Slurm clusters. We talked about managing containers for both on-premises and cloud-based clusters, where Cluster Toolkit is strongest (dedicated clusters for single targeted workloads), and where it's less cost-effective than an on-prem solution (large, centralized clusters). We also touched on whether people are connecting on-prem resources to the cloud to make hybrid clusters.
Cloud Skills Boost
We also discussed Cloud Skills Boost as an excellent resource for providing campus users with GCP training. Your Google sales rep can send you an invite to become the Cloud Skills Boost administrator, which will then allow you to make it available to your campus. Institutions get 500 licenses by default, but licenses can be added as needed.
That was it for this time around. We hope to see folks at our October Strategy call next week, the Google Organizational Admin training on November 13th, and at Tech Share next month!
Estimated reading time: 2 minutes
The one NET+ AWS Tech Share meeting I missed, and you all talked about the one topic I lead a community practice for: AWS Landing Zone Accelerator (LZA)… Talk about FOMO… Despite missing the September 10th meeting, thankfully, I was able to join the one on the 24th. I would say that the discussions about Innovation Sandbox and OpenTofu healed my FOMO. Some would say I'm coping, and you may be right, but it really was a cool conversation. More on that later in this blog.
The LZA Conversation Continues (without me…)
Kudos to Jon from UW for kicking off a great discussion about LZA. Others on the call were also interested to know more about the tool and who's using it. I'm glad Kevin (AWS) and Tim (UMBC) were there to field questions. Shout out to Tim (UMBC, not me) for pointing the folks on the call to the LZA Community of Practice calls that Kevin and I host every month. If you would like to join a team of AWS experts, LZA veterans, and LZA-curious folks, contact me for the details at tmanik[at]internet2[dot]edu.
Migrations, migrations, migrations
Another cool topic (that I missed): migrations. A couple of schools are currently undergoing on-prem-to-cloud migrations, and there seems to be a common theme. Can you guess what that is? Hint: It starts with VM and ends with ware. That's right. Our "favorite" tool: VMware. I'm starting to think we should do a check-in with our friends who are part of these migrations. It's always fascinating to hear them discuss the nuanced technical and non-technical challenges that their teams and institutions face. Definitely a lot of wisdom gained from hearing their stories.
Innovation Sandbox Takes Center Stage
Now this discussion I actually made it to. Dan (Northwestern) shared how they're using AWS Innovation Sandbox for one of their Master's programs: about 50 students doing AI-related capstones. Chi (URI) uses it similarly for researchers wanting to experiment without the overhead. This might be the fourth or fifth time we've talked about Innovation Sandbox in a Tech Share. I think we might want to consider a dedicated session for this. If you think so, send me a message.
The OpenTofu Migration
Another interesting thing that got brought up is OpenTofu.
Quick primer: OpenTofu is an open-source, cloud-agnostic Infrastructure as Code tool, essentially an open-source fork of Terraform.
Anyway, Rob (LMU) has been looking at migrating to OpenTofu. Luckily, Dan was there to share that his dev group is smoothly migrating from Terraform to OpenTofu using the migration assistant. There was enough interest that we might see a dedicated session on this. As always, let me know if you're adamant about making this happen. Even better if you have been in the trenches with OpenTofu. The community would love to hear from you.
Research IT Consolidation Trend
To close out this month's meeting, Jan (AWS) brought up an interesting pattern she's seeing: several institutions are consolidating research IT under central IT. Jon said UW's situation is similar, though with a twist: since their cloud spend mainly comes from researchers, their cloud offerings naturally fell under research computing over time. If this sounds like your institution, know you're not alone. Most importantly, if you have some thoughts to share about this, I promise you the floor at the next NET+ AWS Tech Share.
Estimated reading time: 2 minutes
What do Looker, Cloud Skills Boost, and Vertex AI have in common? Nothing much, really. I just thought it would grab your attention. Though they are all GCP offerings that we discussed at our September NET+ GCP Tech Share. So I guess that wasn't completely clickbait after all...
Managing Looker Dashboard Ownership
Okay, so I have been trying to work out this issue for a while, so I was very eager to get on this Tech Share call. Here's the situation: when someone creates a Looker dashboard, they become the owner; however, when they leave the company… poof… the dashboard is gone! That is what happened to my team, and it was not a pleasant experience.
John (WashU) chimed in and said that his team uses dedicated admin accounts rather than personal accounts to create dashboards. This should prevent the scramble to transfer ownership when staff transitions occur. It's a good strategy; however, my team decided to find other alternatives due to operational management concerns around admin accounts.
Now you may be asking yourself: Tim, couldn't you just use Google Groups as owners for those dashboards? That would be the Google thing to do, but unfortunately, for some odd reason, Looker dashboards could not be "owned" by a Google Group (at least when my team tried it). Jeff (Googler) did say that it should be possible, so I'll reach out to him and keep you posted in a future blog.
Cloud Skills Boost Adoption
Changing gears to training, Bob (Internet2) asked the group who among them were using Cloud Skills Boost (CSB). Almost 50% of the attendees said yes, including myself. In my opinion, CSB is one of the best, if not the best, learning platforms from any cloud service provider. The platform's hands-on labs provide practical scenarios for learning services like Cloud Run, GKE, Vertex, and more. If you're an engineer, you know that doing is what makes the concepts stick. I can tell you that what I learned through CSB has been fundamental in becoming comfortable in the GCP console.
Jon (UW) noted that the self-sign-up link for CSB has transformed the platform's usability. This is definitely a game changer for lean teams concerned about management overhead. If that sounds like you then check out this feature of CSB. You might just be one link away from upskilling those at your institution.
Vertex AI RAG Engine's Hidden Costs
We come to you with a tale as old as time: a new cloud service whose hidden costs quietly pile up into a hefty bill, waiting to be discovered the moment you peek under the covers...
Kenny (UMich) discovered that Google's Vertex AI RAG Engine, which went GA on September 3rd, automatically deploys a Spanner database costing approximately $1,000 per month. Ezequiel (UCF) said he saw the same thing across his two projects. Luckily, Google was gracious enough to provide some credits to Kenny and his team. So if you’re reading this, stay alert, this could happen to you… but hopefully it won't, because you read this blog, and you’ve got the intel that others don’t have :)
And that's it. That's a wrap for this month's NET+ GCP Tech Share. We collected some real nuggets of wisdom along the way, and I'm already looking forward to the next round of tips and stories. Hope to see you at next month's get-together!
Tech Jams are an opportunity for schools facing a specific technical challenge to sit down with an expert from the cloud service provider and talk through their options. The rest of us watch, tossing in the occasional question. Everyone learns. This Tech Jam was a textbook example of how they should work.
When BigQuery Bills Get Scary
We've all been there. That moment when you open your cloud billing dashboard and wonder if someone accidentally launched a cryptocurrency mining operation on your dime. Today’s NET+ GCP Tech Jam focused on precisely this scenario—how to architect solutions that keep your cloud costs from turning into the stuff of finance office nightmares.
We started out in the world of BigQuery reservations and cost management strategies. UMich's Kenny Moore kicked us off with a story of a BQ/Looker application whose costs exploded when it was exposed to the public, who promptly started scraping the data. Clearly, optimization or protection was called for. Luckily, we were joined by Aaron Pestel, Data Specialist at Google Cloud. Aaron walked us through the somewhat counterintuitive reality that reservations often cost less than on-demand pricing, even when you're not sure about your usage patterns.
The BigQuery Revelation
Here's where things got interesting. Aaron's recommendation to set baseline slots to zero with a maximum of 2,000 slots essentially gives you on-demand performance at reservation prices—kind of like getting the premium cable package at basic cable rates. For most educational institutions juggling multiple projects and unpredictable research workloads, this approach could be a game-changer.
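For anyone wanting to try this, here's a sketch of the zero-baseline, autoscale-to-2,000 setup using the BigQuery Reservation API. The project, location, edition, and org ID are assumptions to adapt to your environment.

```python
# Create a reservation with zero baseline slots that autoscales to 2,000,
# then assign the whole org to it so queries bill at reservation rates.
from google.cloud import bigquery_reservation_v1 as bqr

client = bqr.ReservationServiceClient()
parent = "projects/my-admin-project/locations/US"   # hypothetical project

reservation = client.create_reservation(
    parent=parent,
    reservation_id="org-default",
    reservation=bqr.Reservation(
        slot_capacity=0,                     # zero baseline: nothing billed at idle
        autoscale=bqr.Reservation.Autoscale(
            max_slots=2000,                  # burst up to 2,000 slots on demand
        ),
        edition=bqr.Edition.ENTERPRISE,      # pick the edition that fits you
    ),
)

client.create_assignment(
    parent=reservation.name,
    assignment=bqr.Assignment(
        assignee="organizations/123456789012",   # hypothetical org ID
        job_type=bqr.Assignment.JobType.QUERY,
    ),
)
```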
The discussion around organizational-level reservations particularly resonated. While some institutions prefer project-level control to match their grant-based funding models, others are finding that folder-based billing structures provide the right balance of oversight and flexibility. It's the eternal higher education challenge: how do you maintain institutional oversight while preserving the academic freedom that makes research possible?
Aaron has a blog post that takes you through these configuration options and spells out three simple best practices. It is a revelation.
The Vertex AI Reality Check
The second pain point we wanted to hit was Vertex AI's RAG Engine and its Spanner database backend. Lucrecia Kim-Boswell (Stanford) and Ethan Connor (CMU) related stories of costs exploding when the RAG Engine's database started racking up bills after the service went GA. For most of our community's projects that would leverage the RAG Engine, using Spanner is massive overkill. It's an impressive database platform, but at about $1k/month just to turn it on, it's not the right tool for the job. Our steadfast Google technical partner, Jeff Nessen, had invited a Vertex AI specialist to tackle this issue, but he couldn't make it. Undaunted, Jeff took it on himself. His frank assessment echoed what many of us have discovered: the promise of managed AI services often comes with price tags that make even well-funded institutions wince. The conversation about alternatives like Cloud SQL or custom RAG implementations outside Vertex AI highlighted the ongoing tension between convenience and cost in the cloud.
This is where the higher education context becomes crucial. We need managed services that understand our unique constraints—limited budgets, diverse user bases, the fact that many cloud teams are, in essence, local resellers, and the need for both innovation and fiscal responsibility.
The Practical Takeaways
Beyond the technical recommendations (budget dashboards, quota management, reservation strategies), the most valuable part of this Tech Jam was the community problem-solving. Kenny's experience with researchers' scraped data resulting in unexpected bills is probably familiar to anyone managing cloud infrastructure in an academic environment.
The upcoming Google Cloud Administrator training on November 13th ($50 for non-CICP schools, free for CICP members) represents exactly the kind of practical investment that can prevent these costly surprises. Sometimes the best cost management strategy is simply understanding the tools you're using.
As we continue building out cloud capabilities in higher education, sessions like this month’s Tech Jam remind us that successful cloud adoption isn't just about technical architecture—it's about creating sustainable financial models that support innovation without breaking institutional budgets.
Neither our budgets nor our time is limitless. That’s why Tech Jams and other sessions organized by the NET+ GCP team can help expand your workforce, your knowledge, and your effectiveness. We hope to see you at our next NET+ GCP event!
By Bob Flynn, Internet2 Senior Program Manager
Estimated reading time: 4 minutes
The NET+ AWS community got a recap of the latest announcements from AWS's New York AI Summit, courtesy of Abhilash Nagilla and Fernando Ibanez. If you missed the session (or got lost in the alphabet soup of service names), here's what you need to know about the developments that might actually matter for your institution.
The Data Foundation Reality Check
AWS continues its march toward making data less painful to work with. The SageMaker Unified Studio updates address something we all know but rarely talk about: most of our AI initiatives fail because our data foundations are terrible. The new streamlined onboarding from S3 and Redshift isn't just another feature – it's an acknowledgment that getting existing assets into a usable state shouldn't require a PhD in data engineering.
The unstructured data support particularly caught my attention. Higher education drowns in unstructured data – from student papers and research documents to video lectures and administrative records. The ability to automatically convert S3 data into queryable insights while combining it with structured SQL tables could actually bridge the gap between what we have and what our AI projects need.
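Unified Studio's own interfaces weren't the focus of the session, but if you want a mental model for "S3 data joined with structured SQL," the plain-Athena version of the pattern looks something like the sketch below. Every bucket, database, and table name here is hypothetical:

```python
# Hypothetical sketch: query text extracted from S3 documents alongside a
# structured table via Athena. All names below are placeholders.
import boto3

athena = boto3.client("athena")

query = """
SELECT d.doc_id, s.student_id, s.program
FROM docs_from_s3 d          -- external table over s3://our-bucket/papers/
JOIN enrollment s ON d.student_id = s.student_id
WHERE d.extracted_text LIKE '%capstone%'
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "campus_analytics"},
    ResultConfiguration={"OutputLocation": "s3://our-bucket/athena-results/"},
)
```

The pitch for Unified Studio is that the messy parts this sketch glosses over, like getting text out of PDFs and video lectures into that external table, become managed steps rather than custom pipelines.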
Storage Economics That Make Sense
S3 Vectors deserves special attention, not for its technical specs but for its economic implications. A 90% reduction in vector storage and query costs isn't just a nice-to-have – it's potentially the difference between piloting AI applications and actually deploying them at institutional scale.
The tiered storage strategy Fernando mentioned makes practical sense: use cost-effective S3 Vectors for your "parking data analysis" use cases, while keeping high-priority student recommendations on OpenSearch for real-time performance. Someone gets that not every query needs millisecond response times.
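In code, that tiering decision can be as thin as a routing layer in front of two stores. The two query functions below are hypothetical stand-ins (the real S3 Vectors and OpenSearch client calls differ); the point is the shape of the decision:

```python
# Sketch of a tiered vector-query router: latency-sensitive traffic goes to
# OpenSearch, batch/analytical traffic to cheaper S3 Vectors storage.
from dataclasses import dataclass

@dataclass
class VectorQuery:
    embedding: list[float]
    top_k: int
    latency_sensitive: bool  # e.g., a live student recommendation

def query_opensearch(q: VectorQuery) -> list[dict]:
    ...  # hypothetical stand-in for the OpenSearch k-NN client call

def query_s3_vectors(q: VectorQuery) -> list[dict]:
    ...  # hypothetical stand-in for the S3 Vectors query call

def route(q: VectorQuery) -> list[dict]:
    if q.latency_sensitive:
        return query_opensearch(q)  # millisecond-class, higher cost
    return query_s3_vectors(q)      # far cheaper, fine for "parking data"
```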
The Agent Revolution (With Guardrails)
The Agent Core announcement represents AWS's bet that we're moving beyond simple chatbots into full workflow automation. The progression from assistants to agents to agentic systems isn't just marketing speak – it reflects how organizations are actually trying to use AI.
What impressed me most wasn't the technical capabilities but the enterprise focus. Agent isolation, comprehensive monitoring, and the ability to use your existing security providers (Okta, Azure AD, perhaps even Shibboleth) suggest AWS learned from early AI deployments that went sideways due to security and governance gaps.
The Agent Core Gateway's semantic search capability addresses a real problem: tool sprawl. When you have 300 available tools and your agent needs to pick the right four, both cost and accuracy depend on smart filtering. The JSON description token economics Fernando outlined aren't theoretical – they're the difference between sustainable AI operations and budget-busting input costs.
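A back-of-the-envelope sketch shows why that filtering matters: embed each tool's JSON description once, then send the model only the closest few for a given request. If each description averages a couple hundred tokens, 300 tools means tens of thousands of input tokens per call versus under a thousand for four. Here embed() is a hypothetical placeholder, not the Gateway's actual API:

```python
# Sketch of semantic tool filtering: pick the k most relevant tools by
# cosine similarity instead of sending all 300 descriptions to the model.
# In production you'd precompute and cache the description embeddings.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for an embedding model call.
    raise NotImplementedError

def select_tools(request: str, tools: dict[str, str], k: int = 4) -> list[str]:
    """tools maps tool name -> JSON description string."""
    q = embed(request)
    q = q / np.linalg.norm(q)
    scores = {}
    for name, description in tools.items():
        v = embed(description)
        scores[name] = float(q @ (v / np.linalg.norm(v)))  # cosine similarity
    return sorted(scores, key=scores.get, reverse=True)[:k]
```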
The Practical Reality
Here's what I walked away thinking: AWS is betting that the future of AI isn't about having the smartest models, but about having the most comprehensive deployment and management infrastructure. That’s consistent with their usual MO. Agent Core Runtime's pay-per-execution model and the Code Interpreter's isolated environment show they understand the operational challenges that follow successful pilots.
For higher education, the use cases Fernando mentioned – transcript processing, 24/7 help desk systems, and student risk identification – aren't moonshots. They're the kind of incremental automation that could actually improve daily operations without requiring institutional transformation.
The Catch
As always with AWS announcements, the devil's in the implementation details and pricing models that aren't fully available yet. Agent Core is still emerging, and anyone who's lived through early AWS service launches knows that "available" and "production-ready" can be very different things.
But the direction is clear: AWS is building the infrastructure to make AI deployment less about custom engineering and more about configuration and integration. Whether that vision matches your institution's reality depends entirely on how well your current data foundations align with what these tools expect.
Resources
The session recording can be found on the CICP Calendar page.
Core Services from the Blog Post:
Amazon SageMaker Unified Studio:
- Main service page:
- Documentation:
- Getting started:
Amazon S3 Vectors:
- Product page:
- Documentation:
- Getting started tutorial:
- Blog announcement:
Amazon Bedrock:
- Main service page:
- Documentation:
- Getting started:
Amazon Bedrock Agent Core:
- Service page:
Kiro IDE:
- Official website:
- Downloads:
- Introduction blog:
- GitHub repository:
Amazon OpenSearch Service:
- Main service page:
- Documentation:
- Getting started:
- Resources:
Strands Agents SDK:
- Official documentation:
- GitHub repository:
- AWS blog announcement:
- AWS Prescriptive Guidance:
Estimated reading time: 2 minutes
The August NET+ AWS Tech Share sessions brought together cloud innovators tackling the most pressing challenges in higher education today. From post-Broadcom VMware migrations to classroom AI deployment, participants shared strategies that could transform your institution's approach to cloud services.
The Great VMware Migration
As Broadcom's acquisition reshapes the VMware landscape, institutions shared surprisingly diverse migration strategies. William & Mary is nearly VMware-free—but their approach revealed unexpected modernization opportunities that Phil promised to detail in future sessions. UC Berkeley moved 150 of 350 VMs to AWS while transitioning others to Nutanix, though Chi hinted at some "interesting surprises" they encountered along the way.
Migration veterans revealed challenges that caught many off-guard: email relays with hidden dependencies, LDAP services that broke in unexpected ways, and what Kevin called "the networking gotchas nobody warns you about." These war stories could save your team months of troubleshooting.
[Image generated by Gemini]
AI Tools Faculty Actually Want
Faculty across institutions are clamoring for custom AI agents, but here's what vendor demos won't tell you: implementation is trickier than expected. UC Berkeley's LibreChat pilot with 200 users revealed some surprising adoption patterns, while UMBC's success with Google's NotebookLM in 300-student classes came with lessons that, as UMBC put it, "you really need to hear before you deploy."
The announcement of Claude for Education via AWS Marketplace generated particular interest—Kevin promised to share pricing details that could significantly impact your AI strategy planning.
Community Innovation Through Barn Raising
The barn raising concept is producing unexpected results. Tim (UMBC) deployed the IU Transcription Tool in just 20 minutes, but the real story is how his library is using it in ways the developers never imagined. The upcoming Adobe PDF remediation barn raising has numerous schools interested, though the Adobe licensing discussion revealed a potential workaround that had several participants furiously taking notes.
Cost Control Secrets
Northwestern's Matthew shared an approach to cost prevention that had several participants asking, "Why didn't we think of that?" Using Service Control Policies to block expensive operations seems obvious—until you hear about the specific AWS policy gaps his team uncovered. Matthew promised to share their full SCP framework after rollout, including patterns that could prevent those Friday afternoon "oops" moments we've all experienced.
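Matthew's framework isn't out yet, so treat the following as a generic illustration of the pattern rather than his team's policy: an SCP that denies EC2 launches outside an approved instance-type list, pushed up with boto3.

```python
# Illustrative SCP blocking instance types outside an approved list.
# The policy content and names are examples, not Northwestern's framework.
import json
import boto3

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyExpensiveInstances",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringNotEquals": {
                "ec2:InstanceType": ["t3.micro", "t3.small", "m5.large"]
            }
        },
    }],
}

boto3.client("organizations").create_policy(
    Name="deny-expensive-instances",
    Description="Block instance types outside the approved list",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```

Remember that a created policy still has to be attached to an OU or account before it takes effect.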
The Tool Everyone Wanted Access To
Rob's Kiro IDE story stole the show. His auto-tagging web app project—completed in a week versus the months it would traditionally take—had participants immediately asking about waitlist access. But Rob's candid discussion about "where Kiro struggles" provided insights you won't find in AWS marketing materials. His offer to do a detailed show-and-tell has us already marking calendars.
Penn State's Wake-Up Call
Shane's AWS Connect story took an unexpected turn. While exceeding their 85% migration goal was impressive, the real validation came during a regional 911 outage. As Shane put it, "When traditional lines went down and ours stayed up, even our skeptics became believers." The implementation details he shared could reshape how you think about campus communication resilience.
Your Next Move
These sessions consistently deliver insights you can't get from documentation or vendor presentations. Here's what's coming:
Don't Miss:
- Northwestern's SCP framework release
- Kiro IDE show-and-tell
Lock In These Dates:
- AWS re:Invent: December 1-5 (get discount codes from your account manager)
- Internet2 TechEX: December, Denver (We’re having another AWS Gameday!)
The conversations that happen between agenda items often contain the most valuable insights. As one participant noted, "I learn more in the chat during these sessions than in most formal training."
Be sure to check out the other blog posts we've written. As always, feel free to send any feedback to tmanik[at]internet2[dot]edu.
A few months back, I found myself doing what I do best – bombarding Christian Michael from Google with questions. This time, it wasn't about hyperscale mysteries or cloud architecture philosophies, but about something far more practical: what exactly do you do with Google's Cloud Skills Boost for Organizations licenses once you have them?
If you're like me, you probably received word that your institution now has access to free Cloud Skills Boost licenses and thought, "Great! Now what?" The good news is that Christian was gracious enough to walk our community through the ins and outs of this platform during our recent webinar, and I'm here to share the highlights.
The "No Instruction Manual" Problem
When Google made Cloud Skills Boost licenses available to higher education at no cost, it was genuinely exciting news. But as I pointed out to Christian, "it did not come with an instruction book." That's where this community conversation came in – to fill the gap between getting access and actually making the platform useful for your teams.
The platform isn't just about individual learning (though it certainly supports that). It's designed as a full-service LMS that lets you provision up to 500 users initially (with more available upon request), track team progress, create custom learning paths, and even assign deadlines. Think of it as your institution's cloud training command center.
Beyond the Basics: What Makes This Different
What struck me most during Christian's demonstration was how the platform addresses different learning styles and technical proficiency levels. Whether you have someone completely new to cloud concepts or a certified AWS professional looking to cross-train on Google Cloud, there are accelerated paths that meet people where they are.
The hands-on lab environments are particularly compelling. These aren't just video courses – they're controlled sandbox environments where your team can practice without breaking anything in your production systems, ranging from 100-level "copy and paste this SQL code" exercises to 400-level skill challenges that remove the training wheels entirely.
The Administrative Reality Check
Tim Manik asked the question we were all thinking: "What is the admin overhead for setting all this up?" The answer was refreshingly straightforward. Once you've been designated as an administrator by your Google account manager (and yes, Google can help if your original admin leaves), adding users is as simple as uploading a spreadsheet of email addresses or creating invitation links for specific domains or teams.
Users have to opt-in by accepting invitations, so you're not adding people without their permission. The platform also supports multiple teams within your organization – perfect for those of us juggling research computing, central IT, and various college-level groups that all need different training approaches.
The Community Angle
What really resonated with me was Christian's suggestion for handling cross-institutional training scenarios. While you can't easily mix users from different organizations in the same learning paths, you can all follow the same learning path and then leverage public profiles to create visibility across institutional boundaries. It's not perfect, but it's workable – and it maintains the organizational structure that makes sense for licensing and data ownership.
The Bottom Line
Here's what I walked away with: Cloud Skills Boost for Organizations is a great no-cost opportunity for upskilling teams in an era where, as Christian pointed out, technical skills have a shelf life of about two and a half years instead of the six we used to enjoy.
If your institution doesn't have access yet, reach out to Christian Michael at cjmichael@google.com. If you do have access but haven't deployed it effectively, it's worth the conversation to understand what you're missing.
After all, digital transformation is enabled by technology, but it's powered by people. And right now, those people need all the help they can get staying current with tools that seem to evolve weekly.
The recording and materials from our session are available on the Internet2 Cloud Infrastructure Community Program calendar.
As always, feel free to reach out with questions – preferably ones I can answer without another round of Christian-bombardment.
Estimated reading time: 2 minutes
The July NET+ AWS Tech Share sessions brought together the community to tackle pressing challenges around AWS's new free tier model, root account management, and the ever-present VMware migration deadlines. If you missed these discussions, you missed practical solutions that could save your institution time and money.
AWS's New Free Tier Model: What It Means for Education
AWS unveiled a significant shift in their free tier structure that generated immediate interest. The new model provides $200 in credits over six months—$100 upfront plus another $100 after completing specific checklist items. This approach mirrors Google Cloud's model, automatically stopping services when credits are exhausted rather than surprising users with unexpected bills.
The catch? These benefits don't extend to organizational accounts. As Bill confirmed during the discussion, institutions using AWS Organizations won't see these free tier advantages—a crucial detail for anyone planning account structures. The conversation that followed revealed several workarounds that participants are testing, though we'll need to wait for next month's results.
Breaking the Root Access Bottleneck
Stanford shared their bold experiment: completely shutting down root account access for linked accounts. What seemed risky on paper proved surprisingly manageable in practice. David from another institution confirmed their success with delegated admin implementation, noting that root access is really only essential for annual password rotations.
The real value came from the troubleshooting discussion—participants shared specific scenarios where they'd been burned and how delegated admin saved the day. This shift represents a significant security improvement while maintaining operational flexibility.
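For the curious, the usual enforcement mechanism behind this kind of lockdown is an SCP that denies actions taken as a member account's root principal. A minimal, purely illustrative version (not Stanford's actual policy):

```python
# Illustrative SCP denying actions performed as the root user in member
# accounts -- a common lockdown pattern, not Stanford's actual policy.
root_lockdown = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRootUser",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringLike": {"aws:PrincipalArn": "arn:aws:iam::*:root"}
        },
    }],
}
```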
The Billing Automation Spectrum
The conversation revealed a fascinating spectrum of billing approaches across institutions. While Shruthi described a manual process using Workday tags and spreadsheets, Lucrecia showcased a fully automated workflow leveraging Oracle EAM and ServiceNow integration.
Here's where it got interesting: Lucrecia and Shruthi are scheduling a follow-up to walk through the Oracle EAM implementation details. If automated billing has been on your wishlist, you'll want to catch the outcomes of that collaboration in future sessions.
VMware Migration Reality Check
With mid-August deadlines looming, Phil's VMware migration update struck a nerve. Rob's experience with the MAP Lite program offered this gem: "We lost $15,000 in migration credits because our tags weren't properly configured from day one." The room went quiet—then erupted with questions about proper tagging strategies.
Rob promised to share his tagging template at the next meeting, along with lessons learned from their Banner migration that "nobody warns you about in the official documentation."
Innovation Spotlight: From AI to Automation
The second July session showcased developments that had participants frantically taking notes. Tommy from AWS introduced AgentCore, Bedrock's new agent capability, alongside S3's new vector support (S3 Vectors). One participant mentioned this could eliminate their $30,000 annual database costs—naturally, everyone wanted details.
Penn State's Shane demonstrated their Kion-based account provisioning system, now fully API-driven and integrated with ServiceNow. When he mentioned their provisioning time had dropped from three days to seven minutes, the chat exploded with questions. Shane is offering a deep-dive demo at an upcoming session for anyone interested in the technical implementation.
The AI Tool Debate Continues
The LibreChat versus Open WebUI discussion revealed institutions taking vastly different approaches to generative AI. Sam from UVA posed questions that highlighted gaps in everyone's understanding of these tools' security implications—questions that deserve dedicated time in future sessions.
Next Steps and Resources
Several participants are scheduling follow-up conversations to share implementation details that couldn't fit in our time together.
Mark your calendars for the upcoming AWS Imagine conference and Internet2’s Tech Exchange—and don't miss August's Tech Share where we'll hear results from this month's experiments.
Be sure to check out the other blog posts we've written. As always, feel free to send any feedback to tmanik[at]internet2[dot]edu.
