
This is a work in progress.

The TIER/midPoint_container GitHub project contains the artifacts needed to build and deploy a dockerized version of midPoint suitable for use within the TIER IdM environment.

This is the status of the work:

Each requirement is listed below with its description (plus optional solution comments) and its current state.

1. Base Linux Image
  All TIER containers must be based on the current CentOS image. As of March 2018, this is CentOS 7.
    1. Source: the standard maintained CentOS 7 Docker image.
    2. (Under discussion) Potential use of a CentOS 7 image from Docker Hub that includes what is needed to use systemd as init (instead of supervisord). We may enable this option if obtaining/implementing the logging changes we need in supervisord turns out to be hard: https://hub.docker.com/r/centos/systemd/
    3. When build pipelines are published for production, they must include a yum update step.
  State: in progress

2. Servlet Engine
  Tomcat will be used whenever a servlet engine is needed.
  State: done

3. Java Distribution
  Zulu should be used.
  State: waiting

4. Database
  1. If a relational database is provided within a container, MariaDB will be used.
  2. In general, database support is normally handled externally by the user or via a TIER-maintained MariaDB container.

  Solution comments: The midPoint repository can be attached to the midPoint server in a flexible way. It can either be deployed in another Docker container or be provided externally, on premises or in the cloud.

  State: done (except that we use a custom-built MariaDB image instead of the TIER-maintained one)
5. Multi-Process Container
  Supervisord will be used whenever a container needs more than one process.
  State: waiting

6. TIER Beacon
  Run the TIER Beacon code on a regular interval as specified in the documentation. Unless the component has its own scheduling mechanism for running external code, this requirement will usually result in the need to support cron and run supervisord in the container.
  State: waiting

7. Container Configuration

a) Standard Data

  1. Containers may receive configuration data via the environment, as described below for Secret Data (7b).
  2. Configuration data may be mounted into the container from external storage.
  3. Configuration data may be "burned" into the container while it is being built.
  4. There are many trade-offs between options 2 and 3; some environments will choose to enable the end user to build their containers using either method (a minimal Compose sketch follows this list).

  State: ready for comments
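To make these options concrete, here is a minimal docker-compose.yml sketch combining option 1 (environment) with option 2 (mounted configuration). The service name, image name, and paths are placeholders, not part of this specification:

services:
  midpoint-server:
    image: tier/midpoint:latest                  # placeholder image name
    environment:
      - ENV=test                                 # option 1: configuration passed via the environment
    volumes:
      - ./config/midpoint-home:/opt/midpoint/var # option 2: configuration mounted from external storage (placeholder paths)
# Option 3 would instead COPY the same configuration files into the image at build time.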

7. Container Configuration

b) Secret Data

  1. The preferred mechanism for supporting data that must be protected (e.g., passwords, keys, etc.) is Swarm-mode Docker secrets.
  2. Docker secrets are read-only to the application.
  3. Secret processing: Docker secrets are processed using one of the two mechanisms described below.
    1. Secrets, or pointers to secrets, are passed in the environment using the syntax described below: either a single value is supplied or, with the _FILE suffix, a path pointing to a Docker Swarm secret location (see the sketch after this list).
      1. COMPONENT_DATABASE_PASSWORD=foobar
      2. COMPONENT_DATABASE_PASSWORD_FILE=/run/secrets/my_password_file
      3. Container startup scripts
        1. Start-up scripts process the environment and do whatever setup is needed to make secrets usable in the application.
        2. If the environment contains both the _FILE and the name-only variable, the _FILE form is to be used.
      4. Documentation/comments for each attribute are required.
    2. A naming convention is developed for all application files that will exist in /run/secrets. Scripting within the container processes these files appropriately, linking them to the application components as needed. Documentation/comments regarding the naming convention and files are required.

  State: ready for comments
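As a sketch of mechanism 1, the following Compose fragment shows both forms for a hypothetical component; the service, image, variable, and secret names are examples only:

version: "3.3"
services:
  component:
    image: example/component:latest    # placeholder image
    environment:
      # either pass the value directly:
      # - COMPONENT_DATABASE_PASSWORD=foobar
      # or, preferably, point at a Docker Swarm secret using the _FILE suffix:
      - COMPONENT_DATABASE_PASSWORD_FILE=/run/secrets/my_password_file
    secrets:
      - my_password_file               # Swarm mounts this secret at /run/secrets/my_password_file
secrets:
  my_password_file:
    external: true                     # e.g. created with: docker secret create my_password_file -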
8. Container Orchestration
  1. Containers are designed for compatibility/ease of use with Docker Swarm mode, using docker stack deploy and Compose files.
  2. Work to not preclude the use of other orchestration frameworks.
  3. Secrets are automatically mounted in /run/secrets by docker stack deploy using a Compose file (see the sketch after this list).

  State: currently using the docker-compose up command; compatibility with other orchestration frameworks is to be tested
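For illustration, a sketch of how such a stack might be declared and deployed; the stack, service, and secret names are placeholders:

# deploy with: docker stack deploy -c docker-compose.yml <stack-name>
version: "3.3"
services:
  midpoint-server:
    image: tier/midpoint:latest      # placeholder image name
    secrets:
      - my_password_file             # Swarm mounts listed secrets read-only under /run/secrets
secrets:
  my_password_file:
    external: true                   # created beforehand with docker secret create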
9. Logging
  1. All logs from all elements within a container are written to stdout.
  2. Goal: easily parsable records; future work is likely to include JSON-formatted logs.
  3. Lines (records) within each log file start with the following fields:
    a) Component name (e.g., Shibboleth IdP, Grouper Loader, etc.)
    b) Native logfile name (e.g., catalina.out, shibd.log, etc.)
    c) Environment (e.g., Prod, Dev, Test)
    d) A user-supplied token, passed via the environment
    e) The text of the logfile line, without modification.
  4. Fields within a line are separated by the semicolon character. Semicolons are not permitted in the first four fields and must be removed if present.
  5. Spaces also need to be removed from the (c) Environment and (d) user-supplied token fields of each record. If anyone remembers why we need to remove these spaces, please comment here.
  6. Example records:
    1. supervisord;console;testing;Build:1.2.3;2018-04-02 18:27:30,778 CRIT Set uid to user 0
    2. tomcat;catalina.out;testing;Build:1.2.3;2018-04-02 18:27:32,915 [main] INFO  org.apache.coyote.http11.Http11NioProtocol- Initializing ProtocolHandler ["https-jsse-nio-443"]
  7. Timestamps in logs must default to UTC. Documentation should exist to assist users with changing this default to a local timezone. The default of UTC instead of EST or PST seems logical given that many future campus deployments will include components deployed in multiple timezones for redundancy.

  State: done
10. Shibboleth Integration
  Users can be authenticated to midPoint using Shibboleth.
  State: in progress

Documentation

Logging feature

Logging is configured by setting the following environment variables, either on the command line or in docker-compose.yml (see the commented-out examples in the provided file).

Environment variable | Meaning | Default value
COMPONENT | component name | midpoint
LOGFILE | native log file name | midpoint.log
ENV | environment (e.g. prod, dev, test) | demo
USERTOKEN | arbitrary user-supplied token | current midPoint version, e.g. 3.9-SNAPSHOT

According to the specification, semicolons in these fields are eliminated (replaced by underscores). The same is done for spaces in ENV and USERTOKEN.
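For example, a compose file might set these variables as follows (the service and image names are placeholders; the variable names and semantics are as documented above):

services:
  midpoint-server:
    image: tier/midpoint:latest    # placeholder image name
    environment:
      - COMPONENT=midpoint
      - LOGFILE=midpoint.log
      - ENV=prod                   # semicolons (and, for ENV/USERTOKEN, spaces) are replaced by underscores
      - "USERTOKEN=Build:1.2.3"    # arbitrary token; becomes the fourth field of each log record

With these values, each log line would be prefixed midpoint;midpoint.log;prod;Build:1.2.3; followed by the original log text.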

Repository attachment feature

Repository configuration is done via the following environment variables.

Environment variable | Meaning | Default value
REPO_DATABASE_TYPE | Type of the database. Supported values are mariadb, mysql, postgresql, sqlserver, oracle. It is possible to use H2 as well, but H2 is inappropriate for production use. | mariadb
REPO_JDBC_URL | URL of the database. | Depends on REPO_DATABASE_TYPE (see below)
REPO_HOST | Host of the database. Used to construct the URL. | midpoint-data
REPO_PORT | Port of the database. Used to construct the URL. | 3306
REPO_DATABASE | Specific database to connect to. Used to construct the URL. | midpoint
REPO_USER | User under which the connection to the database is made. | root
REPO_PASSWORD_FILE | File (e.g. holding a Docker secret) that contains the password for the db connection. | /run/secrets/m_database_password.txt

Default REPO_JDBC_URL values by database type:

MariaDB: jdbc:mariadb://$REPO_HOST:$REPO_PORT/$REPO_DATABASE?characterEncoding=utf8

MySQL: jdbc:mysql://$REPO_HOST:$REPO_PORT/$REPO_DATABASE?characterEncoding=utf8

PostgreSQL: jdbc:postgresql://$REPO_HOST:$REPO_PORT/$REPO_DATABASE

SQL Server: jdbc:sqlserver://$REPO_HOST:$REPO_PORT;database=$REPO_DATABASE

Oracle: jdbc:oracle:thin:@$REPO_HOST:$REPO_PORT/xe
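
As an illustration, these variables could be set in a compose file that runs MariaDB alongside midPoint. The service names and the mariadb:10.3 image are examples only; as noted above, the project itself uses a custom-built MariaDB image:

services:
  midpoint-data:
    image: mariadb:10.3                  # stand-in for the custom-built MariaDB image
  midpoint-server:
    image: tier/midpoint:latest          # placeholder image name
    environment:
      - REPO_DATABASE_TYPE=mariadb
      - REPO_HOST=midpoint-data          # resolves to the midpoint-data service above
      - REPO_PORT=3306
      - REPO_DATABASE=midpoint
      - REPO_USER=root
      - REPO_PASSWORD_FILE=/run/secrets/m_database_password.txt   # the secret file itself is supplied as described under "Docker secrets" below
    depends_on:
      - midpoint-data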

Docker secrets

As of v3.9devel-578-gb20f43e (September 10th, 2018), each configuration parameter can be supplied either as a string value or as a file reference. This is to allow using Docker secrets to provide values for sensitive parameters.

Currently, there are two standard places where references to Docker secrets might be used:

Environment variable | Meaning | Default value
REPO_PASSWORD_FILE | File that contains the password for the db connection. | /run/secrets/m_database_password.txt
KEYSTORE_PASSWORD_FILE | File that contains the password for the standard midPoint keystore. | /run/secrets/m_keystore_password.txt

This is how these file references are used:

If a configuration parameter name (e.g. midpoint.keystore.keyStorePassword) is suffixed with _FILE, forming midpoint.keystore.keyStorePassword_FILE, then the value of the configuration parameter is taken from the specified file. As with regular parameters, the file pointer can be defined either in config.xml or on the command line using the -D Java option. So, for example, the mapping of the REPO_PASSWORD_FILE and KEYSTORE_PASSWORD_FILE environment variables is done by the following lines in the midPoint Dockerfile:

ENV REPO_PASSWORD_FILE /run/secrets/m_database_password.txt
ENV KEYSTORE_PASSWORD_FILE /run/secrets/m_keystore_password.txt

(...)

# Execution
CMD java -Xmx$MEM -Xms2048M -Dfile.encoding=UTF8 \
(...)
       -Dmidpoint.repository.jdbcPassword_FILE=$REPO_PASSWORD_FILE \
       -Dmidpoint.keystore.keyStorePassword_FILE=$KEYSTORE_PASSWORD_FILE \
(...)
       -jar $MP_DIR/lib/midpoint.war

If needed, other sensitive information can be provided to midPoint in a similar way. In particular, constants are expected to be used here, a typical example being resource passwords.
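For completeness, here is a Swarm-mode sketch of supplying both files at the default paths listed above; the secret source names and the use of the Compose long syntax are assumptions for illustration, not necessarily what the project's compose file does:

version: "3.3"
services:
  midpoint-server:
    image: tier/midpoint:latest          # placeholder image name
    secrets:
      # long syntax: mount each secret under the filename expected in /run/secrets
      - source: mp_database_password
        target: m_database_password.txt
      - source: mp_keystore_password
        target: m_keystore_password.txt
secrets:
  mp_database_password:
    external: true                       # e.g. docker secret create mp_database_password -
  mp_keystore_password:
    external: true

With docker stack deploy, Swarm then mounts these files read-only at /run/secrets/m_database_password.txt and /run/secrets/m_keystore_password.txt, matching the defaults in the table above.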

Shibboleth integration

...

