Open $MIDPOINT_HOME/var/config.xml in an editor and add a <constants> section:
<constants>
  <resourceGTESTdomain>your.googledomain.edu</resourceGTESTdomain>
  ...
</constants>
In context, the relevant part of config.xml looks like this:
<configuration>
  <midpoint>
    <webApplication>
      <importFolder>${midpoint.home}/import</importFolder>
    </webApplication>
    <repository>
      <repositoryServiceFactoryClass>com.evolveum.midpoint.repo.sql.SqlRepositoryFactory</repositoryServiceFactoryClass>
      <database>mariadb</database>
      <jdbcUsername>redacted</jdbcUsername>
      <jdbcPassword>redacted</jdbcPassword>
      <jdbcUrl>jdbc:mariadb://localhost:3306/redacted?characterEncoding=utf8</jdbcUrl>
    </repository>
    <constants>
      <resourceGTESTdomain>your.googledomain.edu</resourceGTESTdomain>
      <resourceGTESTclientid>changeme</resourceGTESTclientid>
      <resourceGTESTclientsecret>changeme</resourceGTESTclientsecret>
      <resourceGTESTrefreshtoken>changeme</resourceGTESTrefreshtoken>
      <resourceFOOBARhost>foobar.someplace.edu</resourceFOOBARhost>
      <resourceFOOBARport>8080</resourceFOOBARport>
    </constants>
...
Constants are then referenced in resource definitions with the const expression evaluator:
<expression><const>CONSTANT_VALUE_NAME</const></expression>
For example, the Google Apps connector configuration becomes:
<configurationProperties xmlns:gen379="http://midpoint.evolveum.com/xml/ns/public/connector/icf-1/bundle/com.evolveum.polygon.connector-googleapps/com.evolveum.polygon.connector.googleapps.GoogleAppsConnector">
  <domain><expression><const>resourceGTESTdomain</const></expression></domain>
  <clientId><expression><const>resourceGTESTclientid</const></expression></clientId>
  <clientSecret><expression><const>resourceGTESTclientsecret</const></expression></clientSecret>
  <refreshToken><expression><const>resourceGTESTrefreshtoken</const></expression></refreshToken>
</configurationProperties>
After modifying the resource configuration to use the constant values, you can test the change by viewing the resource in the midPoint UI and clicking the Test Connection button.
Stacy Brock, Oregon State University
Purpose: To help someone with no experience with midPoint set up and run midPoint, and to provide a basic configuration that pulls in users from a data source and syncs that data to an external target system such as LDAP.
Pull the new Docker image from Evolveum:
On the Linux VM:
Add your user to the docker group (don't run Docker as sudo).
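For example (a sketch; the group change takes effect after you log out and back in):
sudo usermod -aG docker $USER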
Then create and start the midPoint container:
docker run -d -p 8080:8080 --name midpoint evolveum/midpoint:latest
Start and stop the container
Once the container has been created, use the start and stop commands to start and stop it.
- Start a stopped container:
docker start midpoint
- Stop a running container:
docker stop midpoint
The stop command preserves your configuration until you remove the midPoint container.
To stop/start only Tomcat, enter the midPoint container's bash shell:
docker exec -it midpoint bash
In the Docker container, fix midpoint.sh: change #!/bin/bash to #!/bin/sh.
midPoint should now be running at http://<VMname>:8080/midpoint/
Log in as administrator with the default password, and change it.
Create Incoming sync from Oracle DB
Copy the Oracle JDBC driver to the VM; it needs to end up in:
/opt/midpoint/var/lib
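One way to get the driver into the running container (a sketch; the jar file name ojdbc8.jar is an assumption):
docker cp ojdbc8.jar midpoint:/opt/midpoint/var/lib/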
Go to: Resources -> New Resource
Resource Basics Tab:
Add Resource Name
Connector: ConnId org.identityconnectors.databasetable.DatabaseTableConnector v1.4.2.0
Next:
Configuration Tab:
Host: <DB Server>
TCP Port: <DB Port>
User: <DB UserName>
User Password: <DB Pwd>
Database: <Oracle DB Name>
Table: <IdM Table Name>
Key Column: <Table Primary Key>
JDBC Driver: oracle.jdbc.driver.OracleDriver
Change Log Column: OPERATIONTIMESTAMP (for us)
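In the resource XML, these fields become the connector configurationProperties, roughly as in the sketch below (the bundle namespace URI and the sample values are assumptions; copy the exact namespace and values from your resource's generated XML):
<configurationProperties xmlns:dbt="http://midpoint.evolveum.com/xml/ns/public/connector/icf-1/bundle/com.evolveum.polygon.connector-databasetable/org.identityconnectors.databasetable.DatabaseTableConnector" xmlns:t="http://prism.evolveum.com/xml/ns/public/types-3">
  <dbt:host>dbserver.someplace.edu</dbt:host>
  <dbt:port>1521</dbt:port>
  <dbt:user>midpoint</dbt:user>
  <dbt:password><t:clearValue>changeme</t:clearValue></dbt:password>
  <dbt:database>IDMDB</dbt:database>
  <dbt:table>IDM_PERSON_VIEW</dbt:table>
  <dbt:keyColumn>NETID</dbt:keyColumn>
  <dbt:jdbcDriver>oracle.jdbc.driver.OracleDriver</dbt:jdbcDriver>
  <dbt:changeLogColumn>OPERATIONTIMESTAMP</dbt:changeLogColumn>
</configurationProperties>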
Schema Tab:
This should automatically bring in AccountObjectClass with all of the DB columns. You can remove columns if you need to, but we accepted them all, since the table is just a view of the columns we need anyway.
Next:
Schema handling Tab:
Click Add Object type to add mappings from DB to midPoint.
Kind: Account
Intent: default
Display name: Default Account
Make sure Default is selected.
Object class: AccountObjectClass
Add Attributes (Click plus + sign):
Select DB Attribute from drop down.
ri: NETID
ri: FIRSTNAME
ri: LASTNAME
Add Inbound mappings (Click plus + sign):
Select Authoritative.
Target: $user/name
Target: $user/givenName
Target: $user/familyName
Next:
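Behind the scenes, the object type and inbound mappings defined on this tab correspond roughly to a schemaHandling objectType like the sketch below (attribute and target names follow our example):
<objectType>
  <kind>account</kind>
  <intent>default</intent>
  <displayName>Default Account</displayName>
  <default>true</default>
  <objectClass>ri:AccountObjectClass</objectClass>
  <attribute>
    <ref>ri:NETID</ref>
    <inbound>
      <authoritative>true</authoritative>
      <target>
        <path>$user/name</path>
      </target>
    </inbound>
  </attribute>
  <attribute>
    <ref>ri:FIRSTNAME</ref>
    <inbound>
      <target>
        <path>$user/givenName</path>
      </target>
    </inbound>
  </attribute>
  <attribute>
    <ref>ri:LASTNAME</ref>
    <inbound>
      <target>
        <path>$user/familyName</path>
      </target>
    </inbound>
  </attribute>
</objectType>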
Synchronization Tab:
Click Add synchronization object to add Actions for syncs.
Name: Default Account
Kind: Account
Intent: default
Select Enabled.
Add Correlation (Click plus + sign):
Filter clause:
<q:equal xmlns:q="http://prism.evolveum.com/xml/ns/public/query-3" xmlns:ri="http://midpoint.evolveum.com/xml/ns/public/resource/instance-3">
  <q:path>name</q:path>
  <expression>
    <path>$account/attributes/ri:ldapid</path>
  </expression>
</q:equal>
Add Reactions (Click plus + sign):
Choose Situation: Linked
Synchronize: True
Choose Situation: Deleted
Synchronize: True
Action: unlink
Choose Situation: Unlinked
Synchronize: True
Action: link
Choose Situation: Unmatched
Synchronize: True
Action: Add focus
Re-select Enabled if it has become deselected.
Next.
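The correlation and reactions above correspond roughly to a synchronization section like this sketch (only some of the reactions are shown; the action handler URIs are the standard midPoint ones):
<objectSynchronization>
  <name>Default Account</name>
  <kind>account</kind>
  <intent>default</intent>
  <enabled>true</enabled>
  <correlation>
    <q:equal xmlns:q="http://prism.evolveum.com/xml/ns/public/query-3" xmlns:ri="http://midpoint.evolveum.com/xml/ns/public/resource/instance-3">
      <q:path>name</q:path>
      <expression>
        <path>$account/attributes/ri:ldapid</path>
      </expression>
    </q:equal>
  </correlation>
  <reaction>
    <situation>linked</situation>
    <synchronize>true</synchronize>
  </reaction>
  <reaction>
    <situation>unmatched</situation>
    <synchronize>true</synchronize>
    <action>
      <handlerUri>http://midpoint.evolveum.com/xml/ns/public/model/action-3#addFocus</handlerUri>
    </action>
  </reaction>
</objectSynchronization>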
Capabilities Tab:
Finish.
Create the Import Sync for the Resource
Go to the Resource Details Page
Click on Accounts Tab:
Click the Import Button bottom left -> Create New
TaskName: IdMImportSync
Type: Importing accounts
Kind: Account
Intent: default
Object class: AccountObjectClass
Check Recurring task
Schedule interval (seconds): 300
Save.
You should now have users in midPoint once the task runs (after about 5 minutes).
Go to Users -> List users
Users from IdM DB should be listed.
Create Export sync to LDAP
For us, it is Oracle DS
Go to: Resources -> New Resource
Resource Basics Tab:
Add Resource Name
Connector: ConnId com.evolveum.polygon.connector.ldap.LdapConnector v1.5.1
Next:
Configuration Tab:
Host: <LDAP Server>
TCP Port: <LDAP Port>
Bind DN: <LDAP BindDN>
Bind Password: <LDAP Pwd>
Connect timeout: 300000
Maximum number of attempts: 5
Base context: <LDAP base context>
Paging strategy: auto
Paging block size: 1000
VLV sort attribute: uid
Primary identifier attribute: uid
Schema Tab:
This will bring in all objectClasses from the LDAP server automatically.
You have to edit the XML to remove the objectClasses that are not needed. I downloaded it into Eclipse to modify it, then re-uploaded it.
Next:
Schema handling Tab:
Click Add Object type to add mappings from midPoint to LDAP.
Kind: Account
Intent: default
Display name: Default Account
Make sure Default is selected.
Object class: inetOrgPerson (for us)
Add Attributes (Click plus + sign):
Select LDAP Attribute from drop down.
ri: dn
ri: uid
ri: givenName
ri: cn
ri: sn
Add Outbound mappings (Click plus + sign):
Select Authoritative.
Strength: Strong
Source: $user/name
Expression type: Script
Language: Groovy
Expression:
<script>
  <code>
    'uid=' + name + ',ou=People,dc=<campus>,dc=edu'
  </code>
</script>
Source: $user/name
Source: $user/givenName
Source: $user/fullName
Source: $user/familyName
Make sure Default is still selected.
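The dn outbound mapping above ends up in the resource XML roughly as follows (a sketch; the base DN placeholder follows our example):
<attribute>
  <ref>ri:dn</ref>
  <outbound>
    <authoritative>true</authoritative>
    <strength>strong</strength>
    <source>
      <path>$user/name</path>
    </source>
    <expression>
      <script>
        <code>'uid=' + name + ',ou=People,dc=<campus>,dc=edu'</code>
      </script>
    </expression>
  </outbound>
</attribute>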
Next:
Synchronization Tab:
Click Add synchronization object to add Actions for syncs.
Name: Default Account
Kind: Account
Intent: default
Select Enabled.
Add Correlation (Click plus + sign):
Filter clause:
<q:equal xmlns:q="http://prism.evolveum.com/xml/ns/public/query-3" xmlns:c="http://midpoint.evolveum.com/xml/ns/public/common/common-3">
  <q:path>c:name</q:path>
  <expression>
    <path>declare namespace ri="http://midpoint.evolveum.com/xml/ns/public/resource/instance-3";
      $account/attributes/ri:uid
    </path>
  </expression>
</q:equal>
Add Reactions (Click plus + sign):
Choose Situation: Linked
Synchronize: True
Choose Situation: Deleted
Synchronize: True
Action: unlink
Choose Situation: Unlinked
Synchronize: True
Action: link
Re-select Enabled if it has become deselected.
Next:
Capabilities Tab:
Finish.
Create LiveSync for the Resource
Go to the Resource Details Page
Click on Accounts Tab:
Click the Live Sync Button bottom left -> Create New
TaskName: LdapExportSync
Type: Live synchronization
Resource reference: <Resource Name>
Kind: Account
Intent: default
Object class: inetOrgPerson
Select: Recurring task
Schedule interval (seconds): 300
IdM users should be synced: Oracle -> midPoint -> LDAP
It took 2-3 days to initially import 100k users from our test LDAP, so we will work on performance tuning next.
In our previous blog post (see Chop Down the Beanstalk, Open Up the Fargate), we examined our path to using AWS Fargate as our container deployment host. I'm pleased to say that Illinois has successfully deployed the TIER Docker images, in particular the Grouper and Shibboleth images, in AWS Fargate. So far this is in a testing environment, but we hope to move to production as soon as July.
Many of you may have seen our demo at the Internet2 Global Summit TIER Showcase, showing our prototype Grouper installation. That was the initial milestone of success, but we have continued to build on that by fine-tuning the deployment, adding the back-end database using AWS RDS, and adding an endpoint to the Application Load Balancer (ALB). In addition, we have repurposed the same code and CI/CD methods to deploy our first Shibboleth test instance in AWS. Here's a quick overview of the components that helped us achieve successful deployment.
Our software development team are more than just developers; they have been the pioneers of our DevOps process of continuous integration and continuous delivery (CI/CD) using AWS, in conjunction with tools such as Github, Terraform, and Drone. Here's a look at a simplified version of our process, as shown on a slide during the TIER Showcase demo:
Github (Version Control)
Github has become our repository for all AWS code. Github has been in use by our software developers for some time, but in the DevOps world, with infrastructure as code, our service admins have now become Github repo contributors. In our design, a particular service is derived using two repos: one containing the re-usable central Terraform modules that our software development team built, and a separate repo that contains our Docker configurations and software configs. As a new service is launched in AWS, a new branch of the central service Terraform module is created, with a new folder in the region of deployment (i.e., us-east-2/services/<servicename>) containing a small number of files for the specific Terraform declarations needed by the service infrastructure, with a reference back to the main modules, leveraging Terraform's modular capabilities to re-use code.
The Docker build is stored in the other repo and contains our sets of Dockerfiles, along with the configs and secrets that are to be added to the images. Although the repos go hand-in-hand to deploy a service, it is important to observe the distinction: the Terraform code builds the AWS infrastructure that defines the service (the virtual hardware), while the Docker code builds the images that are pushed to the Amazon Elastic Container Registry (ECR) (the operating system and software). That is, once the infrastructure is built, if there are no changes to the networking or container definitions, the service images themselves can be quickly updated using the ECR and the containers restarted, without redeploying infrastructure.
Terraform (Infrastructure)
The Terraform code is executed using a useful wrapper known as Terragrunt, by Gruntwork, which preserves the Terraform state in an AWS S3 bucket automatically. Once we have our Github repo of our service branch of the Terraform modules, we can first preview the components being built using terragrunt plan and check for any errors. Once this looks good, we simply issue a terragrunt apply to execute the code. Typically there will be a dozen or so components "constructed" in AWS, including load balancer endpoints, clusters, networks, service names, and tasks.
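A typical run looks something like this (a sketch; the service path follows the repo layout described above):
cd us-east-2/services/grouper
terragrunt plan    # preview the AWS components to be created and check for errors
terragrunt apply   # build the infrastructure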
Docker (Images)
As mentioned, the service configuration of Grouper is based on the published TIER Grouper Docker images. Shibboleth follows the same path using TIER Shibboleth images. Custom Dockerfiles were built using a two-layer model of a customized "base" image, and then further customizations for each Grouper role being deployed. More on that in "The Finished Product" section below.
Drone (Pipelining)
As of this writing, we have not yet implemented the Drone components, but the main purpose of Drone is to "watch" for a new commit/push of the infrastructure configuration in Github, and instigate a build/rebuild of the infrastructure in a staging area of AWS using a docker push. We will update you more on Drone configuration in a future blog post.
In Drone's place, we have basically scripted a similar behavior that logs in to AWS, grabs the docker login command, builds the Docker images locally, tags them, and pushes them up into the Amazon ECR. With the infrastructure already built, it's simply a matter of STOPping the running container, so that it restarts with the newest image, tagged "latest".
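A simplified sketch of that interim script (the account ID, region, and repository name are placeholders, and we use the older aws ecr get-login form of the CLI):
$(aws ecr get-login --no-include-email --region us-east-2)
docker build -t grouper-ui ./grouper-ui
docker tag grouper-ui:latest 123456789012.dkr.ecr.us-east-2.amazonaws.com/grouper-ui:latest
docker push 123456789012.dkr.ecr.us-east-2.amazonaws.com/grouper-ui:latest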
Behave (QA Testing)
As with Drone, we still have work to do, but we have chosen Behave for our QA testing suite. Once this process is refined, we will be sure to describe it in a follow-up post.
The Finished Product
Using the TIER Demo packaging as a starting point, we defined our base infrastructure for Grouper to have three active containers: a UI, a Webservice, and a Daemon node. This was basically accomplished with three separate Terraform configurations and three different task.json files, to allow unique customizations of network port exposure and memory sizes needed by each Grouper role. As mentioned before, this was stored in a branch of our central modules code.
Following the same three-node model, the Docker configuration followed in a similar way. First we built a customized "Grouper base" image, which derived from the original TIER image (FROM tier/grouper:latest), but added our custom configs pointing to the RDS instance of the Grouper database, configs for connecting to the campus Active Directory (which happens to have domain controllers as EC2 instances in AWS), SSL certificates, credentials and secrets, etc. that were common to all three nodes. Then each node had its own individual Dockerfile that derived from our Grouper base image, to add additional specifics unique to that image. Once all the images were built, they were tagged using the AWS ECR repo tag and pushed (docker push) up to the ECR.
Once the images were uploaded to the ECR, we were able to run the Terraform (using Terragrunt) to launch the service. Within minutes, we had running containers answering on the ports defined in the task.json file, and we could bring up the Grouper UI.
More to Do
Still more integration needs to be done. Besides the Drone and Behave processes, we have to conquer a few more things.
- Converting our RDS instance to separate code. Generally, with a persistent database, you don't want to continuously burn it down and redeploy, so we have to treat this as special, but we do want it in code so that the configuration is self-documenting and reproducible, for example to bring up a cloned instance for testing. For now we just "hand-crafted" the RDS instance in the AWS console.
- Tackling DNS: One issue with cloud-hosted solutions is DNS nameservers. While we have our own authoritative nameservers for all things illinois.edu, we must have our DNS configuration for AWS in their Route 53 DNS resolvers. This requires us to delegate a particular zone so that we can associate our ALB endpoints with static, resolvable hostnames in our namespace. In turn, we can define friendly service CNAME records in the campus DNS to point to the Route 53 records.
- Shift to blue/green deployment methods: We have to shift our thinking on what a "test" environment really is and move toward the blue/green concepts of DevOps. This changes the way we think about changes to a service and how they are deployed, but that is the CI/CD model.
- Autoscaling Nodes: We one day hope to configure AWS to simply add more UI, Daemon, or WS nodes if/when the load reaches a level at which we need to spread the work. There is a lot of testing and evaluating ahead of how that behaves in the Grouper architecture.
- Grouper Shell Node: We have settled on the idea of a separate, longer-living EC2 instance that contains Grouper and SSH, to allow us remote access and to execute GSH scripts against the AWS instance of Grouper. It would be interesting to experiment with the idea of converting some of the oft-run GSH scripts into individual launched Fargate instances that just run and die on their own, perhaps on a schedule.
We plan to demo much of this at the June 19th TIER Campus Success meeting. Bring your questions and feedback!