
Okay.  So, what now?

We typically interpret this question as "how do I get from here to production?" or "I inherited this system, so how can I make it not crash?"  The knowledge required for each is largely the same; what differs is mostly which production services you would like to offer.  This section explores important deployment concepts as discussion topics.  We will read through these together in class, but the material is here in detail for future reference as well.


1.  Federations and Federated Identity

A federation is a trust construct designed to reduce the trust "handshake" problem that would otherwise result in quadratic growth of pairwise trust management between providers in large collaborations.  There are typically ground rules and a metadata aggregate file that is the technical representation of the federation's approved membership.

Shibboleth as a software package is deliberately as agnostic as possible about federations.  A federation is not necessary to use Shibboleth.  Your IdP may be a member of zero or many federations, primarily a matter of loading zero or many metadata sources.  Behavior is not well defined when the same SP entityID is registered with different authorities that your IdP trusts.  Interoperability with other providers will be maximized by asking for compliance with the SAML 2.0 Interoperable Specification and membership in a friendly neighborhood federation like InCommon.  Directly loading a provider's metadata is generally termed "bilateral federation" or "bilateral metadata exchange".

Configuration will often remain demarcated by the service you are interoperating with, no matter what trust framework underpins the actual transactions.  Some SP-specific configuration is usually needed, such as attribute release, metadata acquisition, and various protocol flags.  For "other" interoperation needs, such as disabling encryption, there's a relying-party.xml file that allows for extensive customization of behavior when communicating with specific providers.  Information about onboarding new SPs and detail about each of those options is available elsewhere in this document and the official Wiki.
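
As a hedged sketch of the kind of per-provider override relying-party.xml supports (the entityID here is hypothetical, and the override belongs inside the shibboleth.RelyingPartyOverrides list), disabling assertion encryption for one SP might look roughly like this:

<!-- relying-party.xml (sketch): a named override that turns off assertion encryption
     for a single, hypothetical SP; only the profiles listed are enabled for it -->
<bean parent="RelyingPartyByName" c:relyingPartyIds="https://legacy-sp.example.org/shibboleth">
    <property name="profileConfigurations">
        <list>
            <bean parent="SAML2.SSO" p:encryptAssertions="false" />
        </list>
    </property>
</bean>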

During installation, your IdP generated a metadata file describing itself, located by default at /opt/shibboleth-idp/metadata/idp-metadata.xml and hosted at your IdP's entityID.  This file describes your IdP: its name, its endpoint locations, its keys, and so forth.  It should be maintained and updated as these change, and you will often need to supply the file, or parcels of data embedded in it, to all relying parties that load your metadata.  A federation may facilitate this by making it a single step, but you may need to work directly with some SPs.  Try to ensure that all your partners are loading identical metadata, both to improve your ability to respond to quick change requests and to preserve general sanity.

InCommon maintains basic instructions for joining InCommon and for loading InCommon metadata using a Shibboleth IdP.
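
As an illustration of what loading a federation aggregate looks like in conf/metadata-providers.xml (the URL, backing file, and certificate path are placeholders; use the values the federation actually publishes):

<!-- metadata-providers.xml (sketch): fetch, cache, and verify a federation aggregate -->
<MetadataProvider id="InCommonMD" xsi:type="FileBackedHTTPMetadataProvider"
        metadataURL="http://md.incommon.org/InCommon/InCommon-metadata.xml"
        backingFile="%{idp.home}/metadata/InCommon-metadata.xml">
    <!-- Verify the federation operator's signature on the aggregate -->
    <MetadataFilter xsi:type="SignatureValidation"
        certificateFile="%{idp.home}/credentials/inc-md-cert.pem" />
    <!-- Refuse metadata that is stale or lacks an expiration -->
    <MetadataFilter xsi:type="RequiredValidUntil" maxValidityInterval="P14D" />
</MetadataProvider>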

Most federations are built to serve a single vertical in a single country, a natural level of coordination given the overlap in legal requirements and regulatory and coordination authorities.  It's been difficult to scale this model up to verticals that don't naturally closely collaborate or across jurisdictions with wildly different privacy and security laws.  Work continues.

Different federations provide different types of services.  The one constant is generally a single monolithic XML file, signed by the federation authority, listing every member of the federation.  Discovery services, dynamic lookup services, attribute filtration services, protocol translation services, direct support, and more are all offered by a smattering of federations with varying degrees of success.

Scaling the basic federation model down to build a smaller, tighter collaboration on top of an existing federation is more straightforward.  Metadata management and essential policy constraints can be handled by the broad federation, while more specific policy and technical resources like attribute release templates can be handled more locally.  Some local deployments choose to ask all SPs to work through a national federation, while others are completely self-contained.  The trade here is typically agility for workload, though it can get more interesting than that depending on the intensity and extensity of the use cases to be addressed.

Shibboleth typically treats certificates as wrappers for public keys in accordance with the default configuration, but it can be configured to do PKIX validation.  Simple reliance on server names is generally insufficient because encryption of a payload requires having an actual key for the recipient in hand.  It is strongly recommended that you use self-signed certificates with extremely long expiration periods for SAML transactions to avoid the need to deal with key rollover.  Some other software implementations and trust frameworks place more meaning and strict requirements on the X.509 bits themselves, so be aware of the validation requirements of your partners.

2.  Authorization: whodunit

Shibboleth has traditionally elected to perform authorization at the service provider.  This decision was made because the service provider usually has the most information regarding any authorization decision and the associated failures, and because it spares the service provider from revealing its authorization policies.  Authorization based on release of an attribute that says "yes, entitled", such as an eduPersonEntitlement, is a degenerate case that lets the IdP perform the authorization logic while the SP retains responsibility for enforcing the check: a policy decision point (PDP) and a policy enforcement point (PEP), in parlance.  It is still strongly recommended that authorization be performed at the SP whenever possible.

Many other implementations treat basic authorization to use a service as implicit in the release of any assertion to that service, and some prominent applications will expect you to be able to do this.  The IdP contains configuration points that allow it to behave in this way, documented at https://wiki.shibboleth.net/confluence/display/IDP30/ContextCheckInterceptConfiguration.  You can find an example in the distribution at /opt/shibboleth-idp/conf/intercept/context-check-intercept-config.xml.
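
As a rough sketch of the condition that file expects (the attribute name and entitlement value are invented, and you should check the linked page for the exact condition beans supported by your version), a rule admitting only users who carry a particular entitlement might look like this:

<!-- context-check-intercept-config.xml (sketch): only users holding a hypothetical
     entitlement value are allowed to continue to the service -->
<bean id="shibboleth.context-check.Condition"
        parent="shibboleth.Conditions.SimpleAttributePredicate">
    <property name="attributeValueMap">
        <map>
            <entry key="eduPersonEntitlement">
                <list>
                    <value>urn:example.org:entitlement:member-portal</value>
                </list>
            </entry>
        </map>
    </property>
</bean>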

3.  Data Sourcing & The Attribute Resolver


 

The attribute resolver has two primary components: DataConnectors that represent upstream data sources, and AttributeDefinitions that build internal representations of attributes from those data sources.

DataConnectors come in many different flavors, with LDAP being the most popular.  Below is the DataConnector that ships with attribute-resolver-ldap.xml, populated with properties set in ldap.properties.

 

<resolver:DataConnector id="myLDAP" xsi:type="dc:LDAPDirectory"
    ldapURL="%{idp.attribute.resolver.LDAP.ldapURL}"
    baseDN="%{idp.attribute.resolver.LDAP.baseDN}"
    principal="%{idp.attribute.resolver.LDAP.bindDN}"
    principalCredential="%{idp.attribute.resolver.LDAP.bindDNCredential}"
    useStartTLS="%{idp.attribute.resolver.LDAP.useStartTLS:true}">
    <dc:FilterTemplate>
        <![CDATA[
            %{idp.attribute.resolver.LDAP.searchFilter}
        ]]>
    </dc:FilterTemplate>
    <dc:StartTLSTrustCredential id="LDAPtoIdPCredential" xsi:type="sec:X509ResourceBacked">
        <sec:Certificate>%{idp.attribute.resolver.LDAP.trustCertificates}</sec:Certificate>
    </dc:StartTLSTrustCredential>
</resolver:DataConnector>

 

Other DataConnectors can build static attributes, pull from other data sources, and more.  If you have a custom data store that you want to wire up to your SSO system, writing your own DataConnector is an intended extension point.
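
As a minimal sketch of the first of those (the attribute id and value are invented), a Static connector that stamps every resolved user with a constant value looks like this:

<!-- A Static DataConnector (sketch): every user receives this constant value -->
<resolver:DataConnector id="staticAttributes" xsi:type="dc:Static">
    <dc:Attribute id="affiliation">
        <dc:Value>member</dc:Value>
    </dc:Attribute>
</resolver:DataConnector>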

An AttributeDefinition has several pieces: an internal identifier, dependencies on DataConnectors or other AttributeDefinitions from which data is sourced, and one or more encoders that determine how the attribute is expressed when speaking different protocols.

 

<resolver:AttributeDefinition id="emailAddressInternal" xsi:type="ad:Simple" sourceAttributeID="mail">
    <resolver:Dependency ref="myLDAP" />
    <resolver:AttributeEncoder xsi:type="enc:SAML1String" name="urn:mace:dir:attribute-def:mail" encodeType="false" />
    <resolver:AttributeEncoder xsi:type="enc:SAML2String" name="urn:oid:0.9.2342.19200300.100.1.3" friendlyName="PerfectIdentifier" encodeType="false" />
</resolver:AttributeDefinition>

 

The above pulls source attribute data from myLDAP, the DataConnector for your LDAP directory, using the sourceAttributeID of mail.  Inside the IdP, the attribute will be known as emailAddressInternal, the ID you would refer to from an attribute filter in order to release this attribute.

There are two encoders: one for SAML1String and one for SAML2String, allowing the name of the attribute on the wire to differ by protocol.  By convention, urn:oid names are used for X.500 attributes, but we recommend defining all new attribute names as resolvable URLs.  The name is the unique name for the attribute in the protocol named by the encoder type, allowing the attribute to have arbitrarily different representations in different protocols.  SAML 2.0 also includes a non-normative friendlyName field that guides recipients of information who haven't memorized the OID registry.

The encoders in distributed attribute-resolver files have an explicit encodeType="false" added to every attribute encoder.  This will make the attribute value on the wire not include an explicit xsi:type.  Older versions of the identity provider, newer versions of the identity provider that use ported configuration files, and all other attribute encoders that do not have encodeType="false" will include an xsi:type.  While this distinction is pretty philosophical from our perspective, you may encounter deeply philosophical services that you want to interoperate with.  You may need to define special AttributeDefinition clones for this purpose that include explicit typing.  Here is an example of the same string data on the wire, first strongly typed as an xs:string and then not typed, which defaults to xs:anyType.

<!-- typed -->
<saml2:AttributeValue
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">myself</saml2:AttributeValue>

<!-- not typed -->
<saml2:AttributeValue>myself</saml2:AttributeValue>

Unusually for the IdP, the default for encodeType differs between the distributed configuration and the system itself: if encodeType is not explicitly set, it defaults to "true".  This accommodates legacy configuration files, which didn't have the setting at all and defaulted to "true" in prior releases, while still producing a more svelte assertion in a default IdPv3 deployment.  The encoder below will specifically encode the string as a string type on the wire.

 

<resolver:AttributeDefinition id="typedEmailAddressInternal" xsi:type="ad:Simple" sourceAttributeID="mail">
    <resolver:Dependency ref="myLDAP" />
    <resolver:AttributeEncoder xsi:type="enc:SAML2String" name="urn:oid:0.9.2342.19200300.100.1.3" friendlyName="AnythingYouPlease" encodeType="true" />
</resolver:AttributeDefinition>

 

Special attributes have been designed for federated identity specific use cases, such as the persistentId, an identifier that is intended to be opaque and persistent while being unique to the trio of an identity provider, a service provider, and a user.  These attributes can depend on special DataConnectors that can perform the necessary cryptographic hashing.  There is vigorous disagreement about whether it's useful (or even necessary) to store computed values.
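
One form this takes is the computed variant, which hashes the SP's entityID, a source attribute, and a secret salt together.  A sketch, with a placeholder source attribute and salt:

<!-- A computed persistent identifier (sketch): hash of SP entityID + source attribute + salt -->
<resolver:DataConnector id="computedId" xsi:type="dc:ComputedId"
        generatedAttributeID="computedId"
        sourceAttributeID="uid"
        salt="REPLACE_WITH_A_LONG_RANDOM_SALT">
    <resolver:Dependency ref="myLDAP" />
</resolver:DataConnector>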

Avoid the temptation to push too much identity management logic into the attribute resolver.  Performing matches, merges, and so forth upstream in dedicated identity management systems is usually preferable: it ensures that systems of record get updated, that applications that don't consume SAML receive consistent user data, and that maximum data is available to the engine performing identity reconciliation.

Shibboleth has defined an extension to SAML metadata to allow a metadata source to designate an IdP as authoritative for specific domains, known as "Scope".  This check is enabled by default in the Shibboleth SP.  An ndk@internet2.edu eduPersonPrincipalName asserted by Internet2's IdP will be accepted, but ndk@osu.edu would be discarded.  Most other implementations don't leverage scope checks.  The scope is typically expressed on the wire using @domain as a delimiter rather than structured XML, and it may be present in upstream data sources or added to attributes dynamically by the attribute resolver.
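
A scoped attribute such as eduPersonPrincipalName is typically built with a Scoped definition that appends the IdP's scope to an unscoped source attribute; a sketch, assuming the source attribute is uid and the scope comes from the idp.scope property:

<!-- A scoped attribute (sketch): uid from LDAP becomes uid@<your scope> on the wire -->
<resolver:AttributeDefinition id="eduPersonPrincipalName" xsi:type="ad:Scoped"
        scope="%{idp.scope}" sourceAttributeID="uid">
    <resolver:Dependency ref="myLDAP" />
    <resolver:AttributeEncoder xsi:type="enc:SAML2ScopedString"
        name="urn:oid:1.3.6.1.4.1.5923.1.1.1.6" friendlyName="eduPersonPrincipalName" />
</resolver:AttributeDefinition>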

It may be necessary in some use cases to do reverse lookups based on inbound user identifiers.  This should be avoided if possible, but if not, cd warily into the conf/c14n/ directory.

4.  Attribute Release

Attribute filter policies are given an id to help identify the rules that are causing or preventing the release of information.  Each filter policy is composed of two pieces: the PolicyRequirementRule and the AttributeRules.  The entire set of attribute filters that may be involved in any given transaction is evaluated together, with the following precedence:

  • Implicit Deny
  • Explicit Permit
  • Explicit Deny

The following example will release any value of the emailAddressInternal attribute to any SP matching the policy requirements, in this case a literal string match against the entityID of the recipient.

 

<afp:AttributeFilterPolicy id="releaseInternalMailtoCreativeSP">

    <afp:PolicyRequirementRule xsi:type="basic:AttributeRequesterString" value="https://creative.sp.example.org" />

    <afp:AttributeRule attributeID="emailAddressInternal">
        <afp:PermitValueRule xsi:type="basic:ANY" />
    </afp:AttributeRule>

</afp:AttributeFilterPolicy>

 

The PolicyRequirementRule determines when the attribute filter policy is in effect.  The PolicyRequirementRule can be a boolean combination of other rules, which allows you to express arbitrarily complex policies.  The most common matches are performed against the service that will receive the attributes, AttributeRequesterString, or against attributes that the user has at the IdP (e.g. FERPA suppression).

Some of these rules will only be useful for specific protocols.  For example, the CAS protocol does not involve entityIDs at all.

This is followed by one or more AttributeRule elements, each of which affects the release of a given attribute, as identified by an attributeID that refers to an ID from the attribute resolver.  The specific behavior depends on the contents of the AttributeRule.  PermitValueRules of type basic:ANY are most widely used: release any values for that attribute name that may be present.  Value matches can also be made arbitrarily complex.
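
As a hedged sketch of such a composition (the attribute IDs and values are invented), a policy that releases mail to one SP only for users who are not suppressed could be expressed roughly as:

<!-- Sketch: a boolean PolicyRequirementRule combining a requester match with an attribute-value check -->
<afp:AttributeFilterPolicy id="releaseMailUnlessSuppressed">
    <afp:PolicyRequirementRule xsi:type="basic:AND">
        <basic:Rule xsi:type="basic:AttributeRequesterString"
            value="https://creative.sp.example.org" />
        <basic:Rule xsi:type="basic:AttributeValueString"
            attributeID="ferpaSuppressed" value="false" />
    </afp:PolicyRequirementRule>
    <afp:AttributeRule attributeID="emailAddressInternal">
        <afp:PermitValueRule xsi:type="basic:ANY" />
    </afp:AttributeRule>
</afp:AttributeFilterPolicy>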

"Consent" refers generally to asking users for express permission to release data on their behalf, and may be variously prefaced by "user", "informed", "attribute", and other terms with subtly different meaning.

Whether consent is the solution, part of the solution, or part of the problem for attribute release is hotly debated today.  The debate stems from legal, policy, and cultural differences; from lingering challenges in presenting meaningful guidance to users on attribute semantics and usage, especially when dealing with opaque data from large sets, such as group membership; and from a lack of good options for granting service to a user who refuses consent.

Consent is enabled by default in IdPv3 with a default implementation that stores consent records in encrypted cookies, with all the implicit pros and cons of client-side state.  It's possible to store consent results in a database as well, more consistent with the behavior of uApprove in IdPv2.

5.  Authentication, Multi-Factor Authentication (MFA) and External Authentication

In the above curriculum, we configured the IdP to authenticate to a single LDAP service.  The IdP is also capable of authenticating against multiple authentication sources.  Additionally, if you combine several different authentication methods (say, for example, Password and Duo), you can tell the IdP which authentication method you consider to be equal to or better than another.  This allows you to (via the attribute-resolver) require some users to, for example, always use MFA.  The configuration for this is controlled in both the idp.properties file and the authn-comparison.xml file.
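
A related piece of this wiring lives in conf/authn/general-authn.xml, where each login flow descriptor declares which authentication context classes it can satisfy; ranking and comparing those principals is what lets the IdP decide that one method is as good as, or better than, another.  A rough sketch, modeled on the stock Password descriptor (verify the property names against your own copy):

<!-- general-authn.xml (sketch): declare what the Password flow can satisfy -->
<bean id="authn/Password" parent="shibboleth.AuthenticationFlow"
        p:passiveAuthenticationSupported="true"
        p:forcedAuthenticationSupported="true">
    <property name="supportedPrincipals">
        <list>
            <bean parent="shibboleth.SAML2AuthnContextClassRef"
                c:classRef="urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport" />
            <bean parent="shibboleth.SAML2AuthnContextClassRef"
                c:classRef="urn:oasis:names:tc:SAML:2.0:ac:classes:Password" />
        </list>
    </property>
</bean>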

Opinions vary wildly about which authentication methods are "better" in a Platonic ideal sense, and SAML contains facilities for requesting specific methods, so match your recipients' expectations to yours.  Similar conversations about the one true meaning of "two factors" are not expected to conclude soon.  MFA systems that rely on both factors being passed in the same transaction, such as tokens that the IdP can validate, may be supported out of the box via the JAAS login flow.  MFA systems that combine multiple separate factors or communicate with external systems are intended to be supported by writing additional sub-flows.

The authentication engine in IdPv3 is intended to roughly replicate the behavior of the Multi Context Broker (MCB), an extension that was highly popular with IdPv2.  Like the MCB, authentication methods are implemented in distinct modules (Spring Web Flows) and are then wired together.  Unlike IdPv2 with the MCB, IdPv3 can interpret service requests such as, "Give me the highest level of authentication you can."  The IdP will then return to the SP the highest level of authentication that the user was able to accomplish.  At this time there is very little documentation and there are few examples or contributed implementations of additional factors.

6.  Session Management and Logout

By default, the IdP sets a 30-minute duration for the IdP session lifetime (the duration for which the session is valid following first authentication), the authentication lifetime (subtly different from the IdP session lifetime), and the authentication timeout (subtly different from subtle differences).  Many institutions would prefer a longer window, typically 8 hours.  This may be configured in idp.properties by changing the idp.session.timeout, idp.authn.defaultLifetime, and idp.authn.defaultTimeout values.

  • Client-side
    • Client-side session management with multiple servers implies brief session affinity (stickiness), long enough to complete a login transaction between a single user agent and a single server node and set an SSO cookie; this is generally achieved through a smart load balancer setting its own cookies or other mechanisms.  After a successful login, further requests need not hit the same node.  The SSO cookie's contents are protected by symmetric encryption, which means you need to ensure the same key is present on all IdP nodes; furthermore, it is strongly recommended that the key be rotated frequently.
    • Some features will not work under deployment strategies, such as client-side state storage, that don't replicate state between nodes.
  • Server-side
    • Magical hypervisor: Virtualized hardware that can scale large enough that a single IdP remains responsive, and that handles upgrades by bringing up a second VM and eventually changing DNS records to point to it.  This relies on a hypervisor that can gracefully manage failures in the underlying hardware, which some virtualization platforms don't offer.
    • DBMS, typically via Hibernate
    • memcached
  • External
    • If your authentication mechanism is purely external, a reasonable goal is to make that external mechanism responsible for session management as well, since SSO is a common requirement for these systems.  Beware of session recycling and caching mechanisms in the IdP that may short-circuit your session management chokepoints, particularly by ensuring idp.sessions.enabled = false in idp.properties.

The option that is best in your environment is usually just the technology you are most comfortable with. There may be limitations imposed on your deployment depending on the choice you make here. These limitations will not impact most deployments.

https://wiki.shibboleth.net/confluence/display/IDP30/Clustering

https://wiki.shibboleth.net/confluence/display/IDP30/SecretKeyManagement

The IdP supports logout both via SAML 2.0 Single Logout and via a simple redirect, either of which gives the IdP the ability to clear a user's session through front-channel communication.  The user's session is cleared not only by deleting the cookie (shib_idp_session) from the user's browser, but also by destroying the session internally.  Providers that support logout through front-channel SAML or CAS can be wired up to the logout interface.

https://wiki.shibboleth.net/confluence/display/IDP30/LogoutConfiguration

Logout endpoints can be registered in IdP metadata (e.g. /idp/profile/SAML2/SLO/Redirect), or you can often simply redirect users to the IdP's front-channel logout mechanism (/idp/profile/Logout).  Check with the SP before doing this, or better yet, give them a configuration point.
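
For reference, the kind of endpoint declaration involved (the hostname is hypothetical) looks like this in the IdP's metadata:

<!-- idp-metadata.xml (sketch): a front-channel SLO endpoint advertised to partners -->
<SingleLogoutService
    Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
    Location="https://idp.example.org/idp/profile/SAML2/SLO/Redirect" />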

7.  Someone Bookmarked the Login Page

Shibboleth was not designed to accommodate any use cases where the login page itself is bookmarked.  If a user does it anyway, they're likely to do so with either no service destination implied by the bookmark or a single service accidentally built into the bookmark.  This is the most common font of flow execution errors in the IdP's logs, though there are many other potential causes.

This will typically just result in an error after authentication, and the user will need to start from the resource itself.  If you would like to provide a better experience, you might consider construction of a basic default landing page that includes unsolicited SSO links for some of the most popular services.  It could even be weighted by the services that user accesses or other metrics.  Few identity providers do this today, but many are looking at it as a key future use case.

8.  The Back Channel Koan

One of the bigger dilemmas facing every IdP deployer is whether or not to support back-channel queries.  Early editions of Shibboleth relied on them, as do some federated identity protocols.  However, enabling back-channel queries of any form makes clustering much harder because you can't rely on the client-side state or the client's session affinity to bind requests to the right node, forcing session replication or clever encoding of some form.  You may hear this as "the 8443 problem" in reference to the commonly used default port and vivid memories of interactions with network administrators, but apart from fun with firewalls, all the same considerations exist regardless of the port used.

Consider deeply the ramifications of any option before selecting one.  Most SAML-only deployments choose not to support back channel queries.  Some other protocols require back channel queries no matter what.  It's much easier to go back and add service later than it is to remove endpoints that you had declared.  In all regards, ensure that your IdP's metadata is an accurate depiction of how the IdP is set up.
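
For concreteness, a back-channel attribute query endpoint declared in IdP metadata (hostname hypothetical) looks roughly like this; declining to publish such endpoints is how you signal that the back channel is unsupported:

<!-- idp-metadata.xml (sketch): a SOAP endpoint on the back-channel port, declared inside
     the AttributeAuthorityDescriptor -->
<AttributeService
    Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP"
    Location="https://idp.example.org:8443/idp/profile/SAML2/SOAP/AttributeQuery" />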

TLS, used conventionally for every leg of a SAML transaction, offers good protection for data in transit but no protection for data at rest.  Standard flows expose the login token to the user agent (web browser) and any malware it may have collected.  As a result, most integrations also use XML encryption or back-channel queries.

Here is an example port table for a production IdP.  These are the logical mappings for the world-facing interfaces, demonstrated with a single identity provider.  Clustered solutions will need to take their side of the intermediate reverse proxy into account when designing these and further rules.  You only need the 8443 rule if you are permitting back-channel queries.

 

any:any* -> IdP.IP.Add.ress:443 TCP (End user browser to IdP)
any:any* -> IdP.IP.Add.ress:8443 TCP (External SP to IdP direct query interface)

IdP.IP.Add.ress:any -> 205.75.165.125:80 TCP (IdP to InCommon metadata)+
IdP.IP.Add.ress:any -> 140.182.44.53:80 TCP (IdP to InCommon metadata)+
* - can be reduced appropriately to meet "security" requirements, but the idea is that the IdP should be accessible by users anywhere.
+ - InCommon metadata servers; see https://spaces.at.internet2.edu/display/InCFederation/Metadata+Server

In most situations, there is no information passed in the assertion that needs to be concealed from the user, and some integrations don't use encryption.  For highly sensitive information, consider that it's not possible to create Mission Impossible assertions that self-destruct.  The captured payload could be used for brute force decryption attempts indefinitely.

This is not a practical concern in most environments relative to the ease of deployment offered by reliance on the front channel.  Hopefully, it's not a concern in your world.  If it is, consider:

  • the cumulative sensitivity of the data aggregated,
  • the expected duration of that sensitivity,
    • compared to relative expectations of encryption suite implementation strength and cracking technique evolution, and
  • the level of trust you can place in entities to whom the tokens are exposed, such as users' browsers.
9.  Flow Customization

Most of the end-to-end transactions that the IdP is capable of are wired together using Spring Web Flow.  The bulk of this wiring is in the system/flows directory, indicating that the developers don't intend that to be touched by deployers.  This is typically for security, specification compliance, or even purely functional reasons.

There is a counterpart flows/ directory in the top level as well, and that is intended to be tunable by deployers.

10.  Monitoring, Logging, and Auditing
  • You can write a script to parse your IdP logs for useful reports (some may require DEBUG logging). Here are some useful search strings for various reports:
    • successful authentication events:
      • "Login by '<username>' succeeded"
    • failed authentication events:
      • "Login by '<username>' failed" (on same line)
    • assertions issued:
      • "[Shibboleth-Audit:"
    • possible security events (this is only a partial list):
      • "No metadata for relying party"
      • "Replay detected of message"
      • "Error decoding artifact resolve message"
  • Issue:
    • It's important to know when your IdP is not operating properly. There is a status URL that helps to achieve this.
  • How to configure:
    • Have your monitoring system check the URL https://idp.example.org/idp/status and look for one of the words you'll find on that page such as "idp_version".
    • GOTCHA ALERT: This status handler is restricted by IP address.  This may be changed by modifying /opt/shibboleth-idp/conf/access-control.xml (see the sketch at the end of this list) and then restarting the IdP or waiting 15 minutes.
    • You may set up one or more reporting tools.  The IdP's /idp/status pages can be restricted to specific IP addresses, such as those of monitoring servers, and HTTP status codes will give you a basic indication as to whether the IdP is alive.  This does not necessarily imply the IdP is functional, but it can serve as early and quick notice.
    • Often, simply knowing that the IdP is operational is insufficient information. It's much better to successfully validate that an entire federated transaction, end to end, is functioning properly. There are many scripts available that can traverse the entire transaction and report if something fails, including http://staff.washington.edu/fox/webisoget/ and the ECP client available at https://wiki.shibboleth.net/confluence/display/SHIB2/Contributions#Contributions-Other%2CRelated%2CContributions.
  • Notes:
    • 4 log files (in the logs folder of your IdP installation directory):
      • idp-process.log - the main log file and the place to look when troubleshooting
      • idp-warn.log - contains log messages from the process log at WARN or ERROR levels
      • idp-audit.log - A parseable transaction log
      • idp-consent-audit.log - A parseable transaction log specifically of users consenting to have their attributes released to specific services
    • The logback.xml file is re-read every 5 minutes, so no restart is required if you are trying to troubleshoot a problem.
    • Generally, leave the log levels at the default values of WARN or INFO. Set to DEBUG for a potentially massive amount of detail.
    • It is also possible to configure the IdP's logging system to send an email any time it sees a logged message at the ERROR level. Beware - while nice in concept, this can produce lots of spurious emails.
  • How to configure:
    • Configured in (IDP_HOME)/conf/logback.xml
  • Resources:
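
Returning to the status-page access control mentioned above: as a sketch (the monitoring subnet is a placeholder, and the entry sits inside the shibboleth.AccessControlPolicies map), granting a monitoring host access looks roughly like this:

<!-- access-control.xml (sketch): allow loopback plus a hypothetical monitoring subnet -->
<entry key="AccessByIPAddress">
    <bean id="AccessByIPAddress" parent="shibboleth.IPRangeAccessControl"
        p:allowedRanges="#{ {'127.0.0.1/32', '::1/128', '192.0.2.0/24'} }" />
</entry>
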
11.  Dynamic Configuration Reloading

The IdP is capable of dynamically reloading almost any configuration file, but it does not do so out of the box, to protect deployers from inadvertent changes taking effect.  However, it can become rather irritating to have to experience a brief service outage (from a service restart in a single-server environment, for example) just to add an attribute release rule to your configuration.  You can configure the IdP to automatically reload many configuration files on a periodic basis.  The most commonly enabled one is attribute-filter.xml, which addresses the example just mentioned.

You can also force the IdP to reload most configuration files by using a shell script located at $IDP_HOME/bin/reload-service.sh.  Metadata is typically polled automatically in all deployments, but reload of metadata can also be forced using $IDP_HOME/bin/reload-metadata.sh.

The major caveat to this is .properties files.  Spring is used to inject property values from .properties files into the right places in the underlying XML-based configuration.  This effectively means .properties files can't be changed and reloaded without a restart of the servlet container, even if the component itself, such as the attribute resolver, could otherwise be reloaded.

CAUTION: If your edits result in an unparseable configuration, the IdP is designed to continue running with the last good configuration.  This can become confusing if you are several edits past the breakage and are wondering why the IdP won't start after a restart; note that some of the initialization failure messages are logged at intermediate points in the process.

Ensure that any changes you make have the intended effect before you place them into production regardless of how configuration is reloaded.

12.  Credential and Configuration Management

Out of an abundance of caution and good security hygiene, the default configuration of the IdP segregates keys and certificates that are used for different purposes whenever possible.  It's more common in deployment to use one key and certificate for user-facing interactions to accommodate web browser root authority requirements and a separate key and certificate for server-to-server interactions to avoid frequent credential rotation needs or arbitrary constraints.

It is recommended to use a version control system like git, svn, or cvs to manage as many Shibboleth-related files as possible, preferably in tandem with a configuration management system like Chef, Puppet, or Salt.  If you can, aim for the conf/, credentials/, flows/, jetty-base/, messages/, metadata/, views/, and edit-webapp/ directories, although some deployments may not want to back up some of these directories if they are not being customized.

Exactly how to manage the build and deployment process is heavily environment specific, which is why it’s so hard to make blanket recommendations.  If you have little experience managing Shibboleth in production, consider starting by looking at scripts that others have written even if you don’t intend to use them.

13.  Upgrade Management

Don't be afraid to upgrade. Be afraid not to upgrade. But upgrade wisely.

  • Java
    • Install the new Java alongside the old Java
    • When ready to switch, change the JAVA_HOME environment variable to point to the new location.
    • Restart Shibboleth
  • Jetty
    • Install Jetty to new location
    • Adjust your startup script to call the start.jar from the new Jetty.
    • Start the new version
  • Shibboleth IdP
    • Check the Shibboleth wiki for specific upgrade instructions related to the new version
    • Unzip package
    • Run install.sh
    • Restart Shibboleth
    • Check status URL
14.  Tuning and Securing

Most IdP deployments running on reasonable hardware and experiencing fewer than 100,000 logins per day or 50 logins per second will typically not need to worry about tuning.  For larger deployments, Jetty offers direct documentation for most of the optimization that is possible for the container, JVM, and OS.  There isn't much you can do to influence the speed of the IdP itself, since most of the operational overhead comes from XML signing, a critical piece of the security model that must occur in every successful transaction.

The IdP ships with defaults that encourage good security hygiene.  Use file permissions extensively and avoid exposing connectors and webapps to the world, especially potentially sensitive ones such as AJP.

https://wiki.shibboleth.net/confluence/display/IDP30/SystemRequirements
https://wiki.shibboleth.net/confluence/display/IDP30/Jetty93
https://wiki.shibboleth.net/confluence/display/IDP30/Load+Testing+Contributed+Results

 

 

14.5. Please, Have Mercy

  • No
15.  Support Resources

Troubleshooting

  • Check (IDP_HOME)/logs/idp-process.log
  • Read thoroughly any WARN or ERROR messages, and understand that they may require interpretation to determine the root cause.  Wading upstream through Java stacktraces is often productive.  The Wiki below contains an article on common errors.
  • Selectively raise log levels for classes that are misbehaving.  For example, if an attribute that you expect to be released is not being released, it's often helpful to turn logging for both the IdP and the org.ldaptive libraries to DEBUG.  This will let you see whether the attribute was present in the LDAP query response, given a definition, released by a filter, and given an encoder.

Shibboleth Wiki

Mailing lists (the best Users mailing list anywhere, for sure!)

Commercial support

  • Several options: http://shibboleth.net/community/consultants.html
  • You get on the list by asking nicely; you get put at the top of the list by paying a fee.
  • Being on the list implies no endorsement of any kind by either the Shibboleth Consortium or the InCommon Federation