
CAS Administration

Documentation about administering our CAS server infrastructure. More information and documentation about CAS can be found on the Apereo site, in the CAS source code, and in the additional CAS documentation.

Application Deployment

Setting up a development environment

Accessing the source code

Our CAS source-code is maintained as a "Maven overlay" that includes just our customized files. All other (non-customized) files are automatically downloaded as part of the Maven build process.

To get our CAS source code, clone it from our central Git repository on chisel. (If you don't have access, send Adam Franco your SSH public key.)

For CAS 3.x:

git clone git@git.middlebury.edu:web/midd-cas.git

For CAS 4.x:

git clone git@github.com:middlebury/cas4.git midd-cas

Once you have cloned the Git repository, you should have a directory called midd-cas.

This directory contains the following files:

  • README.txt
  • pom.xml - The Maven configuration file. This tells Maven which version of CAS and each library to use and where to find them.
  • src/ - contains our customized source-code and configuration files.
  • target/ - the directory where maven will put the compiled war package.

Building/Running CAS

cd midd-cas/

Update the configuration if needed

vim src/main/webapp/WEB-INF/deployerConfigContext.xml

The configuration file committed to the Git repository on chisel is almost identical to the one in production. If you commit and push changes to this file and then update production, those changes will be carried through.

The current development configuration (in the source repository) refers to a database on chisel that holds the ticket registry and the services configuration. It is fine to continue to use this database if you wish. If not, you can configure another database. Look for the following lines at the bottom of the deployerConfigContext.xml:

    <bean
        id="dataSource"
        class="org.apache.commons.dbcp.BasicDataSource"
        p:driverClassName="com.mysql.jdbc.Driver"
        p:url="jdbc:mysql://anvil.middlebury.edu:3306/db_name?autoReconnect=true"
        p:password="password"
        p:username="username" />

Build the war package

mvn clean package

Deploy the package

Deploying the package involves stopping Tomcat, deleting the CAS files from its webapps/ directory, and putting the new war file in that directory. When Tomcat is started, it will extract the various resources from the war file and run the application.

sudo tomcatctl stop
sudo rm -R  /opt/local/share/java/tomcat5/webapps/cas*
sudo cp target/cas.war /opt/local/share/java/tomcat5/webapps/cas.war
sudo tomcatctl start
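Once Tomcat is back up, you can sanity-check the deployment from the host itself. This is a sketch that assumes Tomcat's default port (8080) and the standard /cas context path:

```shell
# Request the CAS login page from the local Tomcat instance; an HTTP 200
# response code indicates the application deployed and started successfully.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/cas/login
```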

Deploying to a new production host

Tomcat, MySQL connector, Maven

Install Tomcat, the MySQL connector, and Maven as described above.

Apache

In production, CAS must be run under SSL. Since running Tomcat with SSL support is challenging, we let Tomcat run on its default port (8080) and then run Apache as a proxy with SSL support (listening on port 443).

/etc/httpd/conf.d/ssl.conf

...

ProxyRequests Off
ProxyVia On
ProxyPass               /cas    http://localhost:8080/cas
ProxyPassReverse        /cas    http://localhost:8080/cas

...

Certificates

The CAS application must be able to validate (via Java/Tomcat) the certificates of any client applications that use it. Import certificate authority certificates into the Java environment using keytool. See: https://wiki.jasig.org/display/CAS/Solving+SSL+issues for details.
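As a sketch of the keytool step: the certificate filename, alias, and keystore path below are illustrative and will vary with the Java installation on the host (the default password for a stock JVM's cacerts store is changeit):

```shell
# Import a CA certificate (middlebury-ca.crt is an illustrative filename)
# into the JVM's default cacerts trust store. Adjust $JAVA_HOME for the host.
keytool -import -trustcacerts \
    -alias middlebury-ca \
    -file middlebury-ca.crt \
    -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
    -storepass changeit

# Verify that the certificate was added:
keytool -list -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
    -storepass changeit | grep middlebury-ca
```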

CAS Source

The new server's ssh key needs to be granted access to the git repository on chisel:

ssh-keygen
cat /root/.ssh/id_rsa.pub

Send Adam Franco the public key contents.

Clone the git repository:

git clone git@chisel.middlebury.edu:midd-cas.git

Configure the CAS server

You can see what configuration has been done on existing CAS hosts by cd'ing to the midd-cas directory and running:

git diff origin/master

There should only be a few lines changed in:

  • src/main/webapp/WEB-INF/cas.properties - The production URL and hostname need to be set
  • src/main/webapp/WEB-INF/deployerConfigContext.xml - The MySQL database location needs to be changed to the production database.
  • src/main/webapp/WEB-INF/spring-configuration/ticketRegistry.xml - On all but one CAS host, the ticket-registry-cleaner needs to be commented out so that the clean-up operations don't collide:
diff --git a/src/main/webapp/WEB-INF/spring-configuration/ticketRegistry.xml b/src/main/webapp/WEB-INF/spring-configuration/ticketRegistry.xml
index 96d958e..057841b 100644
--- a/src/main/webapp/WEB-INF/spring-configuration/ticketRegistry.xml
+++ b/src/main/webapp/WEB-INF/spring-configuration/ticketRegistry.xml
@@ -15,6 +15,7 @@
 	<tx:annotation-driven transaction-manager="transactionManager"/>
 
 	<!-- TICKET REGISTRY CLEANER -->
+<!--
 	<bean id="ticketRegistryCleaner"
 		class="org.jasig.cas.ticket.registry.support.DefaultTicketRegistryCleaner"
 		p:ticketRegistry-ref="ticketRegistry"
@@ -41,5 +42,5 @@
 		p:startDelay="20000"
 		p:repeatInterval="1800000"
 	/>
-
+-->

Keep track of your config changes:

After you make changes to the CAS configuration, commit them to the local repository on the production host using git:

git status
git diff
git add file/that/was/changed
git status
git commit -m "Made such and such config change."
You can see a history of changes via:

git log

or, with Git 1.5.6 and later:

git log --graph

Deploy

Deployment is the same as listed above:

  1. mvn clean package
  2. tomcatctl stop
  3. delete the files from tomcat's webapps/ directory
  4. copy over the war file to tomcat's webapps/ directory
  5. tomcatctl start

On our production hosts, this deploy process has been scripted as a rebuild_cas command:

[root@hostname ~]# cat /usr/local/bin/rebuild_cas
#!/bin/bash

cd /usr/local/CAS/midd-cas

mvn clean package
status=$?
if [ $status -ne 0 ]
then
    exit $status
fi

rm /usr/share/tomcat6/webapps/cas.war
rm -R /usr/share/tomcat6/webapps/cas
cp target/cas.war /usr/share/tomcat6/webapps/cas.war
service tomcat6 restart


Note: Sometimes (but not always) Apache will not regain communication with Tomcat after Tomcat is restarted. Check this by loading the server-specific URL and, if needed, restarting Apache with service httpd restart.
Note: Currently CAS will not build on hermes and must be built on mercury and copied to hermes. Do this in workbench/midd-cas-hermes/.

Upgrading CAS to a new version

Upgrading CAS involves editing the pom.xml to refer to the new version, then updating any of our customized files that have changed.

After cloning the repository in a development environment, edit pom.xml and update the cas.version line:

<properties>
    <cas.version>3.4.3.1</cas.version>
    <hibernate.version>3.5.0-CR-2</hibernate.version>
    <spring.version>3.0.5.RELEASE</spring.version>
    <commons-dbcp.version>1.3</commons-dbcp.version>
</properties>

Change this to the new CAS version desired. The available release versions are listed in cas-server-webapp/maven-metadata.xml.
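The metadata file is plain XML. As a rough illustration (using a fabricated two-version excerpt, not the real file), the released version numbers can be pulled out with grep and sed:

```shell
# Fabricated excerpt of cas-server-webapp/maven-metadata.xml for illustration;
# the real file lists every released CAS version in the same structure.
cat > /tmp/maven-metadata-sample.xml <<'EOF'
<metadata>
  <versioning>
    <versions>
      <version>3.4.3.1</version>
      <version>3.4.8</version>
    </versions>
  </versioning>
</metadata>
EOF

# List the version numbers, one per line:
grep -o '<version>[^<]*</version>' /tmp/maven-metadata-sample.xml \
    | sed 's/<[^>]*>//g'
```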

Troubleshooting Upgrade Issues

Try deploying CAS; if it fails to start, look at tomcat5/logs/catalina.out to see what broke.

When updating CAS some of the libraries it depends on may also need their versions bumped in the pom.xml. Look at the pom.xml for the release in question to see what the default library versions are.

Similarly, new versions of CAS may expect additional beans to be configured by default. Look at the deployerConfigContext.xml for the new release to see what the defaults are and apply them to our deployerConfigContext.xml. Errors like No bean named 'xxxxYyyyyZzzzz' is defined indicate that a new bean needs to be configured.

Saving Your Updates

Once you have successfully updated CAS in your development/testing environment, commit your changes using Git, then push them to the central repository:

[afranco@Walnut midd-cas]$ git status
# On branch master
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#	modified:   pom.xml
#	modified:   src/main/webapp/WEB-INF/deployerConfigContext.xml
#
no changes added to commit (use "git add" and/or "git commit -a")

[afranco@Walnut midd-cas]$ git diff
diff --git a/pom.xml b/pom.xml
index 262e5b4..1da061c 100644
--- a/pom.xml
+++ b/pom.xml
@@ -76,7 +76,7 @@
</dependencies>

<properties>
-               <cas.version>3.4.3.1</cas.version>
+               <cas.version>3.4.8</cas.version>
<hibernate.version>3.5.0-CR-2</hibernate.version>
<spring.version>3.0.5.RELEASE</spring.version>
<commons-dbcp.version>1.3</commons-dbcp.version>
diff --git a/src/main/webapp/WEB-INF/deployerConfigContext.xml b/src/main/webapp/WEB-INF/deployerConfigContext.xml
index 9182ab6..613a21d 100644
--- a/src/main/webapp/WEB-INF/deployerConfigContext.xml
+++ b/src/main/webapp/WEB-INF/deployerConfigContext.xml
@@ -251,5 +251,6 @@
p:url="jdbc:mysql://chisel.middlebury.edu:3306/dbname?autoReconnect=true"
p:password="password"
p:username="username" />
-
+
+       <bean id="auditTrailManager" class="com.github.inspektr.audit.support.Slf4jLoggingAuditTrailManager" />
</beans>

[afranco@Walnut midd-cas]$ git add .

[afranco@Walnut midd-cas]$ git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#	modified:   pom.xml
#	modified:   src/main/webapp/WEB-INF/deployerConfigContext.xml
#

[afranco@Walnut midd-cas]$ git commit -m "Upgraded CAS to version 3.4.8.
>
> This required adding an audit bean that is now enabled by default."
[master 3aec0ad] Upgraded CAS to version 3.4.8.
2 files changed, 3 insertions(+), 2 deletions(-)

[afranco@Walnut midd-cas]$ git push
Counting objects: 15, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 784 bytes, done.
Total 8 (delta 3), reused 0 (delta 0)
To git@chisel.middlebury.edu:midd-cas.git
af69dbe..3aec0ad  master -> master


Deploying Updates to Production

Updates to the CAS application on the production hosts are performed by fetching the new commits containing the updates from the central Git repository, then merging them with each production host's own config changes.

Generally this can be accomplished by:

[user@desktop ~]$ ssh root@hostname

[root@hostname ~]$ cd /usr/local/CAS/midd-cas/

[root@hostname midd-cas]$ git pull
remote: Counting objects: 15, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 8 (delta 3), reused 0 (delta 0)
Unpacking objects: 100% (8/8), done.
From chisel.middlebury.edu:midd-cas
af69dbe..3aec0ad  master     -> origin/master
Updating af69dbe..3aec0ad
Fast-forward
pom.xml                                           |    2 +-
src/main/webapp/WEB-INF/deployerConfigContext.xml |    3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)

[root@hostname midd-cas]$ rebuild_cas

If there are merge conflicts (such as if a line that was customized in production was also changed in the upgrade), then edit the file to resolve the merge and commit the changes:

[user@desktop ~]$ ssh root@hostname

[root@hostname ~]$ cd /usr/local/CAS/midd-cas/

[root@hostname midd-cas]$ git pull
remote: Counting objects: 15, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 8 (delta 3), reused 0 (delta 0)
Unpacking objects: 100% (8/8), done.
From chisel.middlebury.edu:midd-cas
af69dbe..3aec0ad  master     -> origin/master
Auto-merging src/main/webapp/WEB-INF/deployerConfigContext.xml
CONFLICT (content): Merge conflict in src/main/webapp/WEB-INF/deployerConfigContext.xml
Automatic merge failed; fix conflicts and then commit the result.

[root@hostname midd-cas]$ vim src/main/webapp/WEB-INF/deployerConfigContext.xml

[root@hostname midd-cas]$ git add src/main/webapp/WEB-INF/deployerConfigContext.xml

[root@hostname midd-cas]$ git commit

[root@hostname midd-cas]$ rebuild_cas

Midd Customizations

Beyond the standard config-file changes, we run two CAS customizations that dramatically lessen the work that our web applications need to do when authorizing users: user attributes in the CAS 2.0 protocol response and ancestor group searching. With these two customizations, client applications do not need to do any additional work (other than looking at the CAS response) to get the name, email, and group memberships of the user who logged in. This lessens the complexity of client applications and reduces the number of places where infrastructure-specific customizations (such as ancestor group searching) and configuration need to be made.

Attributes in the CAS 2.0 protocol response

The background and usage of this customization is documented in CAS 2.0 Protocol Extension: Attributes.

The implementation of the customization is the addition of the following line to midd-cas/src/main/webapp/WEB-INF/view/jsp/protocol/2.0/casServiceValidationSuccess.jsp

<cas:attribute name="${fn:escapeXml(attr.key)}" value="${fn:escapeXml(attrVal)}"/></c:forEach></c:forEach></c:forEach>

Ancestor Group Searching

In our Active Directory, groups may have other groups as members. This presents problems for applications that wish to authorize users based on group membership, since most applications simply look at a user's memberOf attribute list and don't check whether those groups are in turn members of other groups.

To ease this pain, our CAS implementation solves this problem centrally by recursively searching for ancestor groups and returning the full list of direct and ancestor groups as the user's memberOf attribute list.

These customizations are located in the PersonDir class-files in midd-cas/src/main/java/org/jasig/services/persondir/support/ldap/.

The custom LdapPersonAttributeDao works just like the normal one, but sets our custom AttributeMapAttributesMapper as the Attribute Mapper to use. In turn, our custom AttributeMapAttributesMapper works like the normal one, except that when it encounters a memberOf attribute, it recursively searches for ancestor groups (while avoiding cycles).
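For comparison, Active Directory can also compute transitive group membership server-side via its LDAP_MATCHING_RULE_IN_CHAIN matching rule (OID 1.2.840.113556.1.4.1941). This is a different technique from our client-side recursion, but it yields the same answer, so it is handy for spot-checking what CAS returns. The server, bind DN, and base DN below are placeholders, not our real values:

```shell
# Hypothetical example: list every group that user jdoe belongs to, directly
# or through nested membership, using AD's transitive matching rule.
# Server, bind DN, and base DN are placeholders; substitute real values.
ldapsearch -x -H ldaps://ad.example.edu \
    -D 'cn=cas-bind,ou=Service Accounts,dc=example,dc=edu' -W \
    -b 'ou=Groups,dc=example,dc=edu' \
    '(member:1.2.840.113556.1.4.1941:=cn=jdoe,ou=People,dc=example,dc=edu)' dn
```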

Midd Theme

The Middlebury theme files are located at midd-cas/src/main/webapp/WEB-INF/view/jsp/midd2010/.

Run-time Administration

Allowed Services Configuration

Each application that authenticates with CAS needs to be added to the "Allowed Services" list. Currently this list is stored in a database table in the shared database that is also used as the ticket registry.

Services can be managed at: https://login.middlebury.edu/cas/services/

When new services are added, the CAS servers will pick them up within 5-10 minutes.
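Because the services list lives in a database table, it can also be inspected directly with the mysql client. The table and column names below are assumptions (CAS's JPA service registry typically stores services in a RegisteredServiceImpl table), so list the actual schema before querying:

```shell
# Inspect the service registry directly. Table/column names are assumptions;
# run SHOW TABLES first to find the table actually holding registered services.
mysql -h chisel.middlebury.edu -u username -p db_name -e 'SHOW TABLES;'
mysql -h chisel.middlebury.edu -u username -p db_name \
    -e 'SELECT id, name, serviceId FROM RegisteredServiceImpl;'
```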

Troubleshooting Errors

The CAS logs are stored at /usr/share/tomcat5/logs/catalina.out.

To-Do list

  • Multicast Ticket Registry or EhCache Ticket Registry for high availability - Currently the ticket-registry database is a single point of failure. Updating to a ticket-registry implementation that allows each CAS server to validate its peers without a single intermediary will help ensure high availability.