Blog from May, 2018

MidPoint configuration objects are represented as XML by default. This post describes how Oregon State University manages our midPoint resource configuration files by leveraging git for storage and change tracking, and Eclipse as an IDE for editing the configuration and applying it to the midPoint server.

Assumptions

  • You are running midPoint 3.6+
  • You are familiar with the basic operation of git
  • You have git and Eclipse installed

midPoint Eclipse Plugin

  1. Install the midPoint Eclipse Plugin by following Evolveum's Eclipse Plugin Installation HOWTO.
  2. Configure the plugin to connect to your midPoint server. See the "Configuring Connections" section in Evolveum's Eclipse Plugin HOWTO.
  3. Create a new project in Eclipse with a type of "General". This project will contain the midPoint configuration files. In this guide, our project is called "midpoint-config-xml".

Git Repository Layout

The git repository layout for our midPoint configuration files follows the folder structure created by the midPoint Eclipse plugin. Here is an example repo layout:

.
├── README.md
└── objects
    └── resources
        ├── GYBONID.xml
        └── ONIDLDAPDEV.xml

GYBONID and ONIDLDAPDEV are midPoint resources, and their configurations are maintained in the corresponding XML files.
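
If you are starting a new repository from scratch, this layout takes only a few commands to create. The sketch below uses the names from this post; note that git does not track empty directories, so objects/resources will only show up in the repo once the first resource XML is committed:

$ git init midpoint-config-xml
$ cd midpoint-config-xml
$ mkdir -p objects/resources
$ touch README.md
$ git add README.md
$ git commit -m "Initial layout for midPoint configuration"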

Tutorial: Adding a New Resource

In this tutorial, you will add a new resource in midPoint for Google Apps, use Eclipse to make changes to its configuration, and use git to store those changes.

Suppose you have a midPoint configuration project in Eclipse that points to files stored in the following git repo:

midpoint-config-xml
└── objects
    └── resources
        ├── GYBONID.xml
        └── ONIDLDAPDEV.xml

Step 1. Download sample resource XML

Most midPoint connectors provide a sample resource XML file to use as a configuration starting point. Download the Google Apps sample resource XML from the connector's documentation.

Step 2. Import the sample resource XML file into midPoint

In midPoint, use the "Import object" tool to import the XML file you downloaded in step 1.

This will add a new "Google Apps" resource in the Resources list.
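
The web UI import is all you need, but if you prefer to script this step, midPoint also exposes a REST API. A minimal sketch, assuming a default local server, the administrator account, and a sample file saved as googleapps-resource-sample.xml (your hostname, credentials, and file name will differ):

$ curl -u administrator \
    -H "Content-Type: application/xml" \
    -X POST \
    --data-binary @googleapps-resource-sample.xml \
    http://localhost:8080/midpoint/ws/rest/resources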

Step 3. Download the resource configuration from the server

In Eclipse, right-click on the "midpoint-config-xml" project name in Package Explorer and use the midPoint plugin to "Browse server objects".

Select "Resource" in the object types list, then click Search.

Select the "Google Apps" resource and click Download.

Step 4. Rename the downloaded XML file (optional)

We recommend that you rename the downloaded file to something that better reflects your organization's resources. In this example, our test Google Apps instance is called GTest, so we've renamed the configuration file to GTEST.xml.

Step 5. Add the new resource configuration XML to the git repo

$ git add objects/resources/GTEST.xml
$ tree
midpoint-config-xml
└── objects
    └── resources
        ├── GTEST.xml
        ├── GYBONID.xml
        └── ONIDLDAPDEV.xml
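
Then commit it so the starting configuration is tracked before you begin making changes (the commit message is just an example):

$ git commit -m "Add GTEST Google Apps resource configuration"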

Step 6. Modify the resource

Now you can start making configuration changes to the resource XML in Eclipse. For example, change the domain to something other than the default.

Save the file. Depending on your workflow, you may want to test your changes before committing them to git.
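
Before committing, git diff is a convenient way to review exactly what changed. The output below is abbreviated, and the element name and values are only illustrative; the real property names come from the Google Apps connector schema in your resource XML:

$ git diff objects/resources/GTEST.xml
-        <gapps:domain>example.com</gapps:domain>
+        <gapps:domain>gtest.example.edu</gapps:domain>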

Step 7. Upload the configuration change to midPoint

In Eclipse, select the modified XML file in the project explorer, right-click on it, select "Server-side actions" from the midPoint plugin menu, and then select "Upload, test, and validate".

If there are no errors, the new configuration will be applied to the resource in midPoint.


Stacy Brock, Oregon State University

As background, Georgia Tech's Information Technology Group (ITG) has been working on a project to integrate Grouper with their Door Control system. They have used Georgia Tech's Identity and Access Management (IAM) department's internal ESB, BuzzAPI, as a proxy to Grouper's Web Services. Through BuzzAPI, ITG can maintain special Door Control Grouper groups that leverage reference groups sourced from GT's LDAP (GTED). ITG's Door Control groups are then provisioned back to GTED as entitlements, and ITG's Door Control software reads those LDAP entitlements to determine a person's access to a door. ITG has also built UIs for viewing a person's Grouper memberships, as well as for allowing admins to create memberships through their custom UI.

A problem has surfaced recently in ITG's UI when creating memberships. They would like to be able to add multiple people to groups at once and, at the same time, add a group as a requirement, or condition, of membership. To make the selection of the conditional group easier, they are using Grouper Web Services (proxied through BuzzAPI) to find all the groups that the selected population has in common. The goal is then to present the intersection of groups as a selection set for the admin user to choose as the conditional group(s).

There have been a couple of hurdles in this process that will require some thought. First, ITG is using the Grouper Lite Web Services, which can only be called for one person at a time to retrieve their memberships. When you are trying to retrieve, store, and compare memberships for many people, the resulting response time isn't desirable. Second, the memberships that would be used as conditions, like affiliation with a given department, are sourced from LDAP. An idea was floated to first query these affiliations from LDAP for efficiency's sake and then translate the LDAP affiliations to Grouper group names. The problem is that our LDAP affiliation names do not intuitively match their corresponding Grouper group names, which makes it difficult to build a successful Web Services call to create the conditional membership.

To get around these problems, we are looking into multiple solutions. The most attractive option may be to use Grouper's Batch Web Services, which allow multiple subjects to be queried at once. This may create efficiencies when trying to retrieve all the common memberships in Grouper for a given selection of people. The other option would be to store the exact LDAP affiliation name in Grouper alongside the resulting Grouper affiliation group. This is already being done in the Grouper group description for these affiliation groups, but we might be able to make it more visible by storing it in a custom attribute or somewhere it could be queried more easily.
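
As a rough sketch of what the batch approach could look like (we have not built this yet), Grouper's REST web services accept multiple subject lookups in a single getMemberships request. The host, WS version, credentials, and subject IDs below are all placeholders:

$ curl -u grouper-ws-user \
    -H "Content-Type: application/json" \
    -X POST \
    -d '{"WsRestGetMembershipsRequest": {"wsSubjectLookups": [{"subjectId": "gburdell1"}, {"subjectId": "gburdell2"}]}}' \
    https://grouper.example.gatech.edu/grouper-ws/servicesRest/json/v2_4_000/memberships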

The GT IAM and ITG teams will continue to look into good solutions to this problem of finding membership intersections for large groups of people. There may be a much easier way to do this that we haven't discovered. Please feel free to leave a comment if you have encountered similar issues and ended up solving them. We'd love to hear from you.

Today UMBC runs a small local Grouper implementation. Like other institutions, we initially struggled with installation, group naming, folder hierarchies, etc. Then we found the TIER Grouper Deployment Guide and the TIER Folder and Group Design section, which has helped us come up with a consistent naming system. We started prior to the publishing of these documents, so the initial implementation involved a lot of trial and error. Then TIER was released, and later we became part of the Campus Success Program (CSP).

As part of the CSP we began testing the Grouper Docker container. The first few container deliverables were a work in progress, but the latest unified container delivers on the promise of a functioning container that can run Grouper with minimal configuration. No more struggles setting up new servers. Recently we experienced an issue with a production server, and in a matter of a few hours I was able to configure and start a production server. While somewhat ahead of our intended schedule, we are now running production containers for the UI/WS and daemon. This is just the start of our journey with containers. Orchestration and cloud services will hopefully follow, allowing more time to utilize additional Grouper functionality.
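
As an illustration only, not our exact production setup (the image name, tag, mount point, and entrypoint arguments here are assumptions), running the daemon and UI/WS roles from the unified container looks roughly like this:

$ docker run -d --name grouper-daemon \
    -v /opt/grouper-conf:/opt/grouper/conf \
    tier/grouper:latest daemon
$ docker run -d --name grouper-ui-ws -p 443:443 \
    -v /opt/grouper-conf:/opt/grouper/conf \
    tier/grouper:latest ui-ws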

Lafayette College is a long-time operator of a locally-run Shibboleth Identity Provider (IdP). When it came time to develop a web portal for the College and configure it and other services for Web SSO, we had to think about what we wanted our Web SSO behavior to look like. Providing a robust user experience was important, so we decided to make CAS central to our Web SSO strategy.

We integrated the shib-cas-authenticator plugin with our IdP because it can delegate authentication from Shibboleth to CAS, which serves as our SSO front end. This bridge between Shibboleth and CAS is a key piece of our authentication architecture. But when we became aware that being able to log into the InCommon Certificate Service using SSO would require supporting the REFEDS MFA Profile, we didn’t know how the bridge would handle MFA signaling. Our interest in Internet2’s TIER Initiative raised another question: could our customizations be added to the IdP packaging?

Though Lafayette couldn’t attend the first TIER CSP F2F in person, we were able to work with the TIER SMEs remotely to get an idea of how this would be possible. Our engagement with Unicon for the Campus Success Program included helping us deploy the IdP package and incorporate our requirements for MFA signaling and the shib-cas-authenticator. They put together a beta release, shib-cas-authn3, that was able to handle the REFEDS MFA Profile. That solved one of our problems.

But what about adding it to the IdP package? The IdP component packaging owner saw no risk in adding a configuration option for Lafayette. We then collaborated with Unicon and TIER on the packaging front to refine the package, incorporate Unicon’s work, and fix misconfigurations that had been introduced. The result was a solution for copying over the required files.

A Dockerfile contains the “recipe” for executing command line instructions that create an image. Multiple arguments allow the basic recipe to be customized. After we tested and verified the behavior of the new shib-cas-authenticator with the MFA Profile support, it was ready to be added during image creation. We added build references to the JAR file from Unicon and to our local configuration files. An additional step rebuilds the IdP WAR file to include these artifacts that provide the local configuration options that we know and love.
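
As a rough illustration of the idea, not Lafayette’s actual build (the image name, build argument, and paths are assumptions), the image build and the key Dockerfile steps look something like this; bin/build.sh is the standard Shibboleth IdP script that repackages idp.war with anything placed under edit-webapp:

$ docker build --build-arg IDP_VERSION=3.3.2 -t lafayette/shib-idp:local .

# Conceptually, the added Dockerfile steps are:
#   COPY shib-cas-authn3.jar /opt/shibboleth-idp/edit-webapp/WEB-INF/lib/
#   COPY conf/ /opt/shibboleth-idp/conf/
#   RUN /opt/shibboleth-idp/bin/build.sh   # rebuild idp.war with the local additions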

Many thanks to Misagh Moayyed and Paul Caskey for rising to this challenge.