PhotoCMS in Alfresco Share

By Chris Dixon

Photo Management is a key component of Web Content Management. However, managing photos for multiple platforms can become a tangled mess of photo sizes, duplications, and renditions.

PhotoCMS, built upon Alfresco Share 4 using ImageMagick and jQuery plugins, simplifies Photo Management for the Enterprise.

Alfresco Share (v4+) makes for a great platform: the Mozilla Rhino JS engine is built in, and jQuery and its numerous plugins can be integrated as advanced tools to supplement ImageMagick, which is also bundled with Alfresco.

To manage the generation of numerous photo renditions, we make use of ImageMagick’s integration with Alfresco. Using a Spring context file, we define each rendition:

<bean id="Rendition40x40" class="com.src.repo.rendition.PhotoRendition" parent="basePhotoRendition">
    <property name="identifier" value="40x40"/>
    <property name="sizeValue" value="40x40"/>
    <property name="applicableSites">
        <list>
            <value>showname</value>
        </list>
    </property>
</bean>

At one of our clients (a large Media Company) we have implemented Alfresco Share 4 with our WCM enhancements and PhotoCMS solution to manage their array of television shows.

They are currently using the PhotoCMS for 80 renditions per show. 80 renditions per image may seem like a lot, but thanks to ImageMagick, creation of renditions is quickly completed and ready for user review.
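The per-size generation can be sketched as a loop over the configured rendition sizes, each producing an ImageMagick command. This is a minimal illustration, not the actual PhotoCMS code: the class name and file names are invented, and the crop-to-fill flags shown are just one common way to produce exact-size renditions.

```java
import java.util.List;

class RenditionCommandBuilder {

    // Crop-to-fill: scale so the image covers the target box, center it,
    // then crop the overflow to the exact rendition size.
    static String buildCommand(String source, String target, String size) {
        return String.join(" ",
                "convert", source,
                "-resize", size + "^",   // "^" = fill the box, may overflow one axis
                "-gravity", "center",
                "-extent", size,         // crop the overflow to the exact size
                target);
    }

    public static void main(String[] args) {
        // Illustrative sizes only; the real list comes from the Spring context file.
        for (String size : List.of("40x40", "320x240", "728x90")) {
            System.out.println(buildCommand("original.jpg", "rendition-" + size + ".jpg", size));
        }
    }
}
```

Each command is independent, which is part of why generating even 80 renditions per image completes quickly.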

Using ImageMagick and the photo’s EXIF properties, we can identify the photo’s orientation before making a crop, to minimize heads being cut off.
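As a rough sketch of that idea: the EXIF Orientation tag tells us how the stored pixels relate to the upright image, so the photo can be rotated upright before the crop is computed. The class below is illustrative only; values 1, 3, 6 and 8 are the common rotation-only orientations from the EXIF specification.

```java
class ExifOrientation {

    // Degrees of clockwise rotation needed to display the image upright,
    // per the EXIF Orientation tag.
    static int rotationFor(int exifOrientation) {
        switch (exifOrientation) {
            case 3:  return 180; // stored upside down
            case 6:  return 90;  // rotate 90 degrees clockwise to display
            case 8:  return 270; // rotate 90 degrees counter-clockwise to display
            default: return 0;   // 1 (normal) and anything unexpected
        }
    }
}
```

An ImageMagick invocation would then be something like `convert in.jpg -rotate <degrees> -crop ... out.jpg` (or simply `-auto-orient` on reasonably recent ImageMagick versions).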

Once all renditions are generated, a user needs a way to review and adjust them; a banner-ad-style rendition is very different from a portrait-cut rendition.

The Photo Rendition page provides the user the opportunity to review the generated rendition and launch the ReCropping tool or the Override Thumbnail tool, if needed.


Clicking on a given size updates the rendition image shown for review. If necessary, the rendition can be recropped by launching the recrop tool. Alternatively, the Override Thumbnail page can be launched, allowing the user to replace the rendition with a custom photo of the same size.


Clicking a given size launches a photo picker where a separate image can be selected or a new image can be uploaded.

Adding the ability to recrop a photo within Alfresco shortens the business process necessary to generate a rendition. The recrop tool integrates the jQuery Resize And Crop (jrac) plugin from Cedric Gampert (https://github.com/trepmag/jrac), giving users the ability to create an updated rendition directly within the CMS and capture the exact rendition wanted.

Recrop Photo

The thumbnails drop-down is populated from the previously mentioned context file, where renditions are defined per site, limiting the rendition sizes offered. The draggable yellow box is locked to the selected thumbnail (rendition) size. The bottom bar allows the user to zoom in and out.

Zoom Example

Upon submit, the zoom level, X and Y coordinates, and thumbnail size are passed to ImageMagick. The updated rendition is generated and saved as an associated rendition on the original image.
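A hedged sketch of that hand-off: if the crop box is drawn on a zoomed view, its coordinates and size can be scaled back to original-image pixels before being handed to ImageMagick’s `-crop` and `-resize` options. The class, method, and parameter names below are invented for illustration; the real tool submits jrac’s values.

```java
class RecropGeometry {

    // Map on-screen crop coordinates back to original-image pixels, then emit
    // "-crop WxH+X+Y -resize WxH!" arguments ("!" forces the exact target size).
    static String cropArgs(int targetW, int targetH, int screenX, int screenY, double zoom) {
        int srcX = (int) Math.round(screenX / zoom);
        int srcY = (int) Math.round(screenY / zoom);
        int srcW = (int) Math.round(targetW / zoom);
        int srcH = (int) Math.round(targetH / zoom);
        return String.format("-crop %dx%d+%d+%d -resize %dx%d!",
                srcW, srcH, srcX, srcY, targetW, targetH);
    }
}
```

For example, a 40x40 box drawn at (100, 50) on a view zoomed to 2x maps to a 20x20 crop at (50, 25) of the original pixels.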

Once the image and its renditions have been sorted and approved, they may need additional details associated with them, i.e. metadata. All photos have additional metadata fields (specific to the show or purpose), including the ability to be tagged and categorized into Photo Albums.

PhotoCMS Albums are actually specialized Alfresco categories.


Specialized Category

This specialized category is just text data consisting of a collection of node IDs and additional metadata (tags, custom fields, ordering etc.) without the need to create additional copies of each image and its renditions.

Sample Album Metadata

This allows a single photo to exist only once in the PhotoCMS while belonging to an unlimited number of albums.
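The album idea above can be sketched as a tiny data structure: an ordered list of node references plus album-level metadata, never a copy of the image. Class and field names here are hypothetical, not the actual PhotoCMS model.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class PhotoAlbum {
    final String name;
    final List<String> nodeRefs = new ArrayList<>();            // ordered photo node IDs
    final Map<String, String> metadata = new LinkedHashMap<>(); // tags, custom fields, etc.

    PhotoAlbum(String name) { this.name = name; }

    // A photo appears at most once per album, but the same nodeRef can be
    // added to any number of albums: each album stores only the reference,
    // never another copy of the image or its renditions.
    void add(String nodeRef) {
        if (!nodeRefs.contains(nodeRef)) {
            nodeRefs.add(nodeRef);
        }
    }
}
```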

Album management needed to be a simple task; using JavaScript, we are able to provide drag-and-drop capabilities to add photos to albums and to manage the ordering of photos within an album.

Add to Album

While the focus of this blog post is on Photo Management within the CMS, we have also developed deployment of photos to netstorage to support web browsing and web apps on multiple platforms. Metadata is deployed, in our media customer’s case, to a custom API in JSON format. This JSON houses all of the photo’s (or album’s) metadata, rendition sizes, and netstorage URLs. Refer to the WCM on Share blog posts for details on Preview vs. Production capabilities.

 

Ixxus and Rothbury Software Merge to Create World’s Leading Alfresco Partner

LONDON, UK – 6th November 2012 – Ixxus and Rothbury Software today announce their agreement to merge, bringing together the two best Platinum Alfresco Partners to create the clear worldwide leader dedicated to delivering robust, open source, information management solutions.

Under the terms of the deal Ixxus will be acquiring Rothbury Software, with Malcolm Teasdale, the Rothbury founder assuming the role of President and CEO of the North American arm of the combined business and also joining the board of Ixxus Group. The acquisition will provide enhanced services for both existing and new clients of both companies, with a substantially enlarged global resource pool, delivering systems integration and content services across an extended set of core technologies including Alfresco, MarkLogic, Apache Lucene/Solr, MongoDB and Drupal. The Ixxus-Rothbury synergy is built around a similar company ethos and values, along with compatible sector expertise and focus.

Steve Odart, Joint CEO and founder of Ixxus, commented: “The merger of our two companies truly delivers on that old saying ‘the whole is greater than the sum of its parts’. Malcolm has built an excellent company, with fantastic staff, a formidable client base of loyal customers, and a reputation for excellence. From Ixxus’ standpoint, we now have a proven partner to accelerate our growth in the US, enabling us to capitalise on the growing demand from global publishers and financial service companies for Alfresco based solutions.”

Ixxus has enjoyed rapid growth over recent years and is now the premier Alfresco Partner working across Europe. With Alfresco’s emergence as the leading open source global content management platform, Ixxus is trusted to deliver true enterprise grade content platforms around the world. Rothbury has a strong reputation for consistently delivering high-quality Alfresco projects across the United States. Rothbury was a natural partner for Ixxus due to its high calibre of open source expertise and existing Platinum Alfresco Partner status, coupled with its expert knowledge and client relationships in the publishing and media and finance sectors.

Malcolm Teasdale, founder of Rothbury, said: “I am really excited about the opportunities this merger creates. The combined business has much greater depth and breadth of resources which will benefit our clients and our staff. Between us we’ve probably delivered more Alfresco projects than any other company in the world and the publishing and social media platforms Ixxus has built will be of real interest over here in the US.”

The two companies believe that their combination will create one holistic team with unrivalled Alfresco skills across both Europe and the US, and a global Alfresco Partner that is second to none. The merged company will be led by a Global Management Team made up of the two Ixxus CEOs, Steve Odart and Paul Samuel, and Rothbury founder Malcolm Teasdale. Ixxus will remain in its current head office location in Central London, as well as maintaining its office in Romania. Rothbury will continue to be based in West Newton, Massachusetts, with all members of the current Rothbury team joining the merged business.

About Ixxus
Headquartered in London and with an office in Romania, Ixxus is an experienced global consultancy and systems integration organisation. The company specialises in the design, development and operational support of enterprise content management and search solutions, with a modern and innovative approach to project delivery.

The company has worked across all industry sectors; including publishing and media, finance, higher education, government, legal and manufacturing.

About Rothbury Software
Rothbury Software is an industry-leading consulting company focused on providing software engineering and web application development. It is a Platinum-level Partner with Alfresco and has had a long association with the company and product since 2007. The Rothbury team utilises MongoDB and the Alfresco content management system, as well as a multitude of HTML5/JavaScript frameworks, to deliver innovative and highly effective solutions that are consistently delivered on time and on budget.

WCM on Share Part II

By Chris Dixon.

All Arise!

WCM management within Alfresco Share brings the best of two worlds together in a simple and easy to use interface, making users feel welcomed and empowered rather than lost. We have been dutifully working on bringing the AVM features to WCM on Share and enhancing the experience with easier to use Alpaca (http://alpacajs.org/) forms, strong merge capabilities and numerous deployment options.

  • Alpaca forms integration brings a new and fresh look to content creation, with calendars, restricted pickers, ordering, and much more, using the existing WCM XSDs to build the new forms and storing the data in JSON and XML formats!

 

We have also invested time in providing deployment capabilities for numerous targets:

  • Alfresco FSTR

  • Custom APIs

  • Directly to MongoDB in JSON format

We have extended our custom dashlet library to include a few dashlets to provide additional control over your Content, Snapshots, and System.

  • Landing Pages for Custom Content Types

  • Inactive Content Publishing

  • Repo Auditing

We continue to build and grow the WCM on Share interface with new and exciting features to bring Web Content controls to the ECM world. Our clients are thrilled with the progress of WCM on Share and the ease of use it brings to content creation and control.

Alfresco and MongoDB

by Harry Moore

Why Alfresco

  • Management of structured and document based content.
  • Metadata.
  • Custom repository services.
  • Aspects, custom models and behaviors.
  • Workflow.
  • Templating (Form based structured content).
  • Web Scripts (The greatest custom API framework – EVER!).
  • Gets your content ready!

Why MongoDB

  • Flexible (no schema) storage structure.
  • BSON (binary JSON) “Document” storage “feels” more natural than flat tables and foreign keys.
  • High traffic, web-ready Scaling:
    • Read scaling with Replica Sets.
    • Write scaling and distributed data with sharded clusters.
  • GridFS interface for storing large files.
  • Develop faster, Deploy easier, Scale bigger
  • More than a dozen supported language drivers (even more community supported drivers) – http://www.mongodb.org/display/DOCS/Drivers
  • Many large production deployments: Disney, Forbes, shutterfly, craigslist, MTV, sourceforge, SAP, and more – http://www.mongodb.org/display/DOCS/Production+Deployments

Why would you use Alfresco and MongoDB together

Rothbury Software has been an Alfresco Platinum Partner since 2006. Alfresco’s proven Content Management system provides a collaborative environment for content creation and control.

Rothbury also recently partnered with 10gen, the creators of MongoDB. MongoDB is a highly scalable and flexible storage solution. This combination provides for control of your content repository and the ability to get your content to a LOT of people, anywhere in the world, very quickly; i.e. publish your content on the web.

Together, Alfresco and MongoDB offer the best enterprise level technology stack for authoring and delivering content to the web. Maybe you don’t want your entire content repository exposed to the web. You want to deliver content to selected channels. Ex.:

  • Components of a campaign targeted to the web
  • Product related downloads
  • Merge transactional data with web content
  • Mobile – deliver the content but let the web site worry about presentation

How to Deploy Content from Alfresco to MongoDB

There are several options available:

  • Push approach – Alfresco hosted code updates MongoDB from a custom Alfresco Action using the MongoDB Java driver.
  • Pull approach – Standalone application pulls content from Alfresco (download servlet, CMIS API, custom Web Script, etc.) and updates MongoDB.

The push approach would work well in situations where you want to deploy individual pieces of content as they change (maybe from a behavior policy bound to an add Aspect event). We’ll look at an example of how to “push” in this article.

The pull approach works best in batch situations where you need to deploy many content updates at once, and where you would probably want to schedule the deployments, too. You want to offload this heavy lifting to another process/application server. Look for a future article with an example of a “pull”.

Here is an example of a push

The method used to push documents from Alfresco to MongoDB should be flexible. We may want to deploy from a workflow, an action or triggered when a property changes. I’m going to make the deployment component a “service” and expose it as a root scoped JavaScript object named ‘mongoService’.

This is an example, not a production-ready solution. For instance, you wouldn’t want to open a new connection to the database each time you insert a document.

I’ll start off by creating an interface, MongoService, that describes my service. I’m going to have a method on my service named “insert”. It will actually perform an “upsert” in MongoDB: if the document does not already exist, it is created; if it does exist, it is replaced with the document we are inserting.

Now create the implementation class MongoServiceImpl.java. You’ll notice that the code that sets the content of the MongoDB document checks the size of the Alfresco node’s content: if it is larger than 1 megabyte, it streams the content to GridFS using nodeRef.toString() as the name of the GridFS file, so it will be easy to find later. If the content is less than 1 megabyte, it is inserted directly into the document as a string. It will be up to the client application reading the documents from MongoDB to determine whether it needs to go to GridFS for the content.
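The size check described here can be sketched in isolation, with the Alfresco and MongoDB calls stubbed out. The class name is invented; the 1 MB threshold and the nodeRef-as-filename convention mirror the description above.

```java
class ContentStorageStrategy {
    static final long GRIDFS_THRESHOLD = 1024L * 1024L; // 1 megabyte, chosen arbitrarily

    // True when the content should be streamed to GridFS rather than
    // embedded in the BSON document as a string.
    static boolean useGridFs(long contentSizeBytes) {
        return contentSizeBytes > GRIDFS_THRESHOLD;
    }

    // Name under which the GridFS file is stored, so it can be found from
    // the deployed document later: the Alfresco nodeRef string itself.
    static String gridFsName(String nodeRef) {
        return nodeRef;
    }
}
```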

GridFS is a very nice feature of MongoDB, which lets you store files.

Implement the interface (the “insert” method):

There are several constructors to choose from when creating a Mongo instance. I chose the one that takes a List of ServerAddress objects. Using a list of ServerAddresses, I can give the Java driver a list of “seed” nodes to choose from if it should lose a connection. In a replica set scenario, you really only need to connect to one of the mongod processes in the cluster, and the driver will grab all the cluster information it needs from that one server to reconnect if the master should change (via an election process). A seed list is also useful for sharded clusters, to give the driver a list of the mongos processes; the Java driver will determine which of the mongos processes to use.

Get a reference to the database server: mongo = new Mongo()

Setting the WriteConcern to SAFE tells the driver to wait for confirmation that the data was written to at least the primary node in the cluster. This slows down writes but lets you detect errors, so you will know if your data made it to MongoDB.

mongo.setWriteConcern(WriteConcern.SAFE);

There are several options for configuring WriteConcern, including WriteConcern.NONE, which is fire-and-forget: you will never know if your data made it to the database.

Get a DB object: DB mongoDb = mongo.getDB(database);

Note that the database does not need to exist on the MongoDB server to get a reference to it. MongoDB will create the database (and the collection for that matter) the first time you write data to it.

Now the collection:

DBCollection dbCollection = mongoDb.getCollection(collection);

Build the BSON document to send to MongoDB

Note the use of Alfresco’s nodeRef string as the document _id. This way I don’t have to store another identifying value in Alfresco to find the correct document to update later on. You can see this in the dbCollection.update. The first argument is a selector used to find any existing document to update:

dbCollection.update(new BasicDBObject(“_id”, nodeRef.toString()), document, true, false);

In addition to passing the document we built from the nodeRef properties, the third argument says to perform an upsert. Otherwise we would need to do an insert the first time a document is deployed. We are working with a single document at a time so the final argument (set to false here) specifies we are not performing a “multi” update.

Local helper methods

Get a list of the tags (or categories) applied to the Alfresco document. This method will return a list of the names of the tags (or categories); not their paths. So it works well for tags but for categories we would probably want to construct a path from the root category in the classification:

If the content is large (I arbitrarily chose 1 megabyte), then store the content using MongoDB’s GridFS interface. The data is still in the same MongoDB database, but in two other collections: one to store the metadata for the document and another to store the file’s content broken into “chunks”:

Get an input stream to read the Alfresco node’s content. Get a GridFS object and use it to create a GridFSInputFile file from an input stream. Set the file name to the nodeRef String. Then save the GridFSFile.

For content less than 1 megabyte in size, just get it as a String and store it directly in the document in a property named ‘content’. If the ‘content’ property is expected to grow over time with subsequent updates, then it would be best practice to store the content in another collection and use a DBRef (similar to a foreign key) in the ‘content’ property here. This is because MongoDB tries to update a document in place if it can, which is very fast. However, if the new document is larger than the old one, MongoDB may need to move things around or even allocate more space for the collection (an expensive operation).

Housekeeping. These setters give Spring a place to inject our dependencies.

Create the Spring bean that will expose our service to JavaScript

ScriptMongoService.java:

Create the Spring context

Finally, we need to wire up the Spring beans to expose the service to JavaScript: rs-mongodb-repository-context.xml:
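The original listing is not reproduced here; a minimal sketch of such a context file might look like the following, using the class names mentioned in this article (the `com.src.repo.mongo` package, bean ids, and setter names are illustrative assumptions). The `baseJavaScriptExtension` parent bean and `extensionName` property are Alfresco’s standard mechanism for registering root-scoped JavaScript objects.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <!-- The service implementation (hypothetical package and property names) -->
    <bean id="rs.mongoService" class="com.src.repo.mongo.MongoServiceImpl">
        <property name="nodeService" ref="NodeService"/>
        <property name="contentService" ref="ContentService"/>
    </bean>

    <!-- Expose the service to JavaScript as the root-scoped object 'mongoService' -->
    <bean id="rs.scriptMongoService" class="com.src.repo.mongo.ScriptMongoService"
          parent="baseJavaScriptExtension">
        <property name="extensionName" value="mongoService"/>
        <property name="mongoService" ref="rs.mongoService"/>
    </bean>
</beans>
```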

Test Script

We’ll need a script to run the service. Create the following script in Data Dictionary/Scripts test-mongo.js:

Build and deploy into Alfresco:

You will need the MongoDB official Java driver in your classpath. You can download it from github: https://github.com/mongodb/mongo-java-driver/downloads

Copy the jar file into Alfresco’s webapp, e.g. {tomcat}/webapps/alfresco/WEB-INF/lib, or package it into an AMP.

Compile and deploy the jar into Alfresco and restart.

Create a space rule to run the script:

Create a space rule on a folder in Share. The rule should fire the script action with the above script whenever a document is modified and it has the tag “mongo” applied.

Start MongoDB

Create a configuration file. This is not necessary if you run with all defaults. You don’t want to run with smallfiles and noprealloc in production.

/etc/mongodb.conf:
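The configuration listing is not reproduced above; a minimal development-only example might look like the following (the dbpath and logpath values are assumptions; adjust them for your system):

```ini
# /etc/mongodb.conf - development settings only
dbpath = /var/lib/mongodb
logpath = /var/log/mongodb/mongodb.log
logappend = true
port = 27017
smallfiles = true    # smaller preallocated data files; not for production
noprealloc = true    # skip data-file preallocation; not for production
```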

Start MongoDB from a terminal (e.g. mongod -f /etc/mongodb.conf):

Deploy some content

Create some content and apply the tag:

 

Check the database for the data

Log in to MongoDB and run a query:

If you sent a file whose content is larger than 1 megabyte you will see ‘null’ for the ‘content’ property. Look in the alfrescoLargeFiles.files and .chunks collections in the mongo shell:

Note that you will not see these files if you use the mongofiles command-line tool, because that tool assumes the default bucket name “fs” but we used “alfrescoLargeFiles”. See https://jira.mongodb.org/browse/SERVER-1970

Conclusions

I have worked with Alfresco since 2006 and it has been a blast. There isn’t much you can’t do with it, integrate it with, or build on top of it. MongoDB is making a big splash in the “Big Data” market. Events have been packed with interested technology managers and developers. Rothbury Software believes these technologies complement each other in a way that will benefit current and future clients. We have several active projects using Alfresco and MongoDB.

Look for a future article on how the document data authored in Alfresco and deployed to MongoDB can be consumed by mobile apps.

WCM in Share: Snapshots/Rollback/versions

By Livern Chin

Snapshots

With the lack of snapshotting support in Alfresco Share, we have implemented a snapshot approach to support change tracking for the Share site as a whole. This approach relies heavily on Alfresco’s versioning aspect, which must be enabled in order for snapshotting to work successfully.

Snapshots are created as part of a business workflow process.  They can be deployed to various file transfer receivers once successfully created.

Snapshots are implemented as a collection of specializedType content nodes called snapshotItem under a specializedType folder called snapshot in our custom site model. A snapshot folder will have custom properties to describe the snapshot itself such as id, label, snapshot state, its creator and others. A snapshot item is a content node that stores the nodeRef to the content node it represents and its current version in the source container.

We have also implemented a custom WebProject type, which we represent as a specializedType folder called workarea. Our custom Share site can have one or more workareas, each with its own snapshot collection.

The processes for creating and reverting a snapshot are implemented as a Java service in Alfresco. The createSnapshot method expects a site NodeRef, the name of the workarea (“WebProject”) the snapshot is to be created in, a label that describes the snapshot, and the state of the snapshot. In our implementation, the state is used to indicate whether a snapshot is approved for production deployment or has been successfully deployed to production. With these parameters, a snapshot folder is created in the specified site’s workarea with the proper label and state. The NodeRef and version of each descendant of the workarea are used to create a snapshot item recording its source node and that node’s version at the time the snapshot is created.

Here are snippets of the createSnapshot method mentioned above.

public NodeRef createSnapshot(NodeRef siteNodeRef, String sourceName, String label, String status) {
    // determine the snapshots container, and the source container for the snapshot, based on the sourceName
    final NodeRef snapshotContainerNodeRef = getSnapshotContainerNodeRef(siteNodeRef, sourceName);
    // look up the snapshotSourceNodeRef by siteNodeRef and sourceName
    NodeRef snapshotSourceNodeRef = getSourceNodeRef(siteNodeRef, sourceName);

    // collect all the nodes under source as a flat list
    final Snapshot snapshot = new Snapshot();
    snapshot.setSnapshotLabel(label);
    snapshot.setSnapshotStatus(status);
    snapshot.setSnapshotId(String.valueOf(getNextSnapshotId(snapshotContainerNodeRef, true)));

    String rootPath = Util.getRootPath(snapshotSourceNodeRef, nodeService, permissionService);

    final List<SnapshotItem> snapshotItems = collectItemsForSnapshot(snapshotSourceNodeRef, rootPath);

    long snapshotCreateStart = 0L;
    // ...

    // save the snapshot to the repository
    Map<QName, Serializable> snapshotFolderProps = snapshot.toNodeProperties();
    String folderName = (String) snapshotFolderProps.get(ContentModel.PROP_NAME);
    FileInfo snapshotFolder = fileFolderService.create(snapshotContainerNodeRef, folderName,
            WebContentModel.TYPE_SNAPSHOT);
    NodeRef snapshotNodeRef = snapshotFolder.getNodeRef();
    nodeService.addProperties(snapshotNodeRef, snapshotFolderProps);

    // the items in this snapshot are simply the items in the snapshotSourceNodeRef
    for (SnapshotItem item : snapshotItems) {
        // snapshotItem is a subtype of cm:content
        Map<QName, Serializable> itemProps = item.toNodeProperties();
        String itemName = (String) itemProps.get(ContentModel.PROP_NAME);
        FileInfo repoItem = fileFolderService.create(snapshotNodeRef, itemName,
                WebContentModel.TYPE_SNAPSHOT_ITEM);
        nodeService.addProperties(repoItem.getNodeRef(), itemProps);
    }
    return snapshotNodeRef;
}

Snapshot Revert

The revert method reverts an entire target workarea, specified in the method parameter targetContainerNodeRef, to the state of the specified snapshot. In our implementation, we chose to first remove the target workarea, then restore each node from the version store by looping through the snapshot items registered in the snapshot and reverting each node to its stored version property value.

// for each item:
for (SnapshotItem item : snapshotItems) {
    String[] pathElements = StringUtils.split(item.getOrigPath(), '/');
    NodeRef itemParent = ensureRelativeFolderPath(targetContainerNodeRef,
            Arrays.asList(pathElements));

    VersionHistory history = versionService.getVersionHistory(item.getItemNodeRef());
    if (history == null) {
        continue;
    }

    Version version = null;
    try {
        version = history.getVersion(item.getVersion());
    } catch (VersionDoesNotExistException e) {
        continue;
    }

    NodeRef versionStoreNodeRef = version.getFrozenStateNodeRef();
    String fileName = item.getOrigName();
    if (fileName == null) {
        fileName = (String) nodeService.getProperty(versionStoreNodeRef, ContentModel.PROP_NAME);
    }

    QName assocQName = QName.createQName(NamespaceService.CONTENT_MODEL_1_0_URI, fileName);
    if (nodeService.exists(item.getItemNodeRef())) {
        logger.warn(item.getItemNodeRef() + " unexpectedly already exists.");
    } else {
        NodeRef restoredNode = versionService.restore(item.getItemNodeRef(), itemParent,
                ContentModel.ASSOC_CONTAINS, assocQName, false);
        revertNodeToSnapshotVersion(restoredNode, item);
        nodeService.setProperty(restoredNode, ContentModel.PROP_NAME, fileName);
    }
}

Deploying a snapshot

A snapshot can be used as the source for a deployment. In order to deploy a snapshot, we allocate staging spaces for each site workarea as needed, to prepare content for deployment based on the snapshot items. Content nodes in the staging space are prepared by creating or updating the existing staging-space node from the version store, based on each snapshot item’s registered NodeRef and its stored version property. A deployed snapshot is tracked in the target staging space to improve performance for future deployments.

private NodeRef createOrUpdateInStagingSpace(final NodeRef folderNodeRef, final SnapshotItem snapshotItem) {
    NodeRef sourceNodeRef = snapshotItem.getItemNodeRef();
    String desiredVersion = snapshotItem.getVersion();
    String relativePath = snapshotItem.getOrigPath();
    String fileName = snapshotItem.getOrigName();

    VersionHistory history = versionService.getVersionHistory(sourceNodeRef);
    Version version = history.getVersion(desiredVersion);

    NodeRef versionStoreNodeRef = version.getFrozenStateNodeRef();

    ContentReader versionReader = contentService.getReader(versionStoreNodeRef, ContentModel.PROP_CONTENT);
    if (versionReader == null || !versionReader.exists() || versionReader.getSize() == 0L) {
        return null;
    }

    ContentData versionContentData = versionReader.getContentData();

    return putContentInStagingSpace(folderNodeRef, relativePath, versionContentData, fileName);
}

Once the staging space is updated with the nodes represented in the snapshot items, it is ready for a file system transfer receiver deployment.

Website Content Management on Alfresco Share

By Chris Dixon with lots of thanks to our developers.

WCM is dead. Long live WCM on Share.

Rothbury has successfully implemented Alfresco’s WCM at numerous clients who are happily managing their web site content within the AVM context. With the grandfathering of the AVM, these clients are on a dead-end road in terms of upgrades and the ability to make use of the feature-rich Alfresco Share. Where do they want to go? Alfresco Share brings a great user experience to ECM; why not to WCM?

We have been dutifully working on bringing the AVM features to WCM on Share, providing an upgrade path for our clients.

• Managing multiple releases within a single site

• Deployment to Development and QA, and scheduled synchronous deployment to Production

• A long-requested WCM enhancement was the ability to merge from a sandbox and see the differences. We provide that ability with Releases and Merging with Enhanced Diff Comparison, including text file editing and image preview.
We have created a few custom dashlets to provide additional control over your Releases, Snapshots, and Deployment endpoints and bring the ease of use straight to the Site Dashboard.

The deployment endpoints (FTRs) for Development provide for testing of releases, and we have provided a dashlet showing the status of each endpoint and who is controlling it. While Development and QA are limited to single controlled deployments, Production File Transfer Receivers (using Alfresco’s FSTR server) allow website updates to be scheduled in a single synchronous deployment, providing control of

• Workflow control from QA to Production.
• Expedited workflow for Hotfixes
• Deployment and Reverting of Snapshots

The feature set of WCM on Share continues to improve as we iterate through milestones and testing by QA.

What feature do you want to see in WCM on Share?

Automated Testing with Alfresco

by Bobby Johnson

In my time working on Alfresco projects, I’ve ended up writing quite a few policies and custom actions, as well as other Java components. The typical way to test these components is to deploy your code into a local application server, start Alfresco, try things out manually, and then rinse and repeat until your code works as expected. If you’ve implemented a custom repository action, you may have seen the wiki page that mentions testing actions, which provides a code example for a simple test extending a class called BaseSpringTest. That section doesn’t provide quite enough information to get started; however, it is possible to write Java tests for your Alfresco components that run in the context of a live Alfresco repository. I’ll give some details on how to get started, along with the advantages and disadvantages of automated testing inside an Alfresco repository.

The wiki code for testing custom actions provides a very basic example of what can be done when writing tests using this method. Here are a few of the things you can do once within this testing context:

  • Access any Spring bean in the application context, including your own.
  • Manipulate the Alfresco repository any way you can within a custom action or other Java component, including creating and updating nodes, executing other actions, etc.
  • Make assertions on the state of the repository after executing code.

This is all possible because Alfresco uses the Spring Framework. The entire Spring application context for Alfresco can be initialized outside of a web application, and that is what the base test class org.alfresco.util.BaseSpringTest helps us to do. If you have looked through the Java sources included in the Alfresco SDK, you’ve seen that Alfresco has a large number of test classes extending this class for their own testing. Their test classes also provide useful examples of setting up the repository state to test more complicated scenarios.

Everything you need to run Spring context JUnit tests is included in the SDK. You’ll probably want to use Apache Ant or another build tool to set up your testing classpath appropriately, specifically including:

  • The Alfresco JARs under lib/server, and all dependencies under lib/server/dependencies
  • Alfresco’s default Spring context and other configuration files (models, database scripts, etc.), found in lib/server/config.
  • Your custom Spring context and all the code you want to test, including any custom models, workflows, etc.
  • Extra Alfresco configuration, including alfresco-global.properties and log4j.properties to override the defaults that come with the SDK.
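As an illustration of that last point, the alfresco-global.properties you put on the test classpath might point at a dedicated test database and content store. The values below are purely illustrative; substitute your own paths and credentials:

```properties
# Illustrative alfresco-global.properties for running Spring context tests.
# Point dir.root at a throwaway location for the content store and indexes.
dir.root=/tmp/alfresco-test-data

# A dedicated MySQL database, separate from your normal Alfresco instance.
db.driver=org.gjt.mm.mysql.Driver
db.url=jdbc:mysql://localhost/alfresco_test
db.username=alfresco
db.password=alfresco
```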

Here are snippets of an appropriate Ant classpath and JUnit task configuration to get you started:

<!-- the alfresco SDK JARs. -->
    <path id="classpath.alfresco.sdk">
        <fileset dir="${alfresco.sdk.dir}/lib/server">
            <include name="**/*.jar" />
            <exclude name="**/ant-1.7.1.jar" />
            <!-- 4.0.0 sdk seems to have duplicate files under server/bin. -->
            <exclude name="bin/*" />
        </fileset>
    </path>

    <!-- we test against the SDK, plus some other config and our source code, of course. -->
    <path id="test.classpath.path">
        <fileset dir="lib">
            <include name="*.jar" />
        </fileset>

        <!-- we want the classpath location of config files to be alfresco/..., not config/alfresco, so we need a separate path element. -->
        <pathelement location="${alfresco.sdk.dir}/lib/server/config" />

        <!-- tests depend on our non-test code. -->
        <pathelement location="${project.dir}/classes" />
    </path>

<!-- The Alfresco app context has the same requirements for heap and PermGen size that it does when run in an app server. -->
        <junit printsummary="yes" haltonfailure="false" fork="yes" showoutput="yes" maxmemory="768m">
            <jvmarg value="-server" />
            <jvmarg value="-XX:MaxPermSize=256M" />
            <jvmarg value="-Dcom.sun.management.jmxremote" />

            <classpath>
                <pathelement location="${test-classes.dir}" />
                <path refid="classpath.alfresco.sdk" />
                <path refid="test.classpath.path" />
            </classpath>
            <formatter type="xml" />

            <batchtest fork="yes" todir="${test-reports.dir}">
                <fileset dir="${test.dir}">
                    <include name="**/Test*.java" />
                </fileset>
            </batchtest>

        </junit>

Because running your test will start up a full Alfresco application context, you need to provide Alfresco with a real database and a file system location to hold the content store and indexes, just like running the Alfresco web application. Using an embedded database like hsqldb seems to be problematic[ii], so I ended up using a new database on the same MySQL server I run Alfresco with locally. Once you run your test with the classpath set up correctly, Alfresco will start up, and for each of your test methods, JUnit will call onSetUpInTransaction() and then run the test method. Note that each of your tests runs in a transaction that is rolled back by default, so your changes to the repository won’t persist across tests.
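If a particular test does need its changes committed (for example, to verify behavior that only fires at commit time), the Spring base class that BaseSpringTest extends provides setComplete() and endTransaction() for this purpose. A minimal sketch, assuming a test method inside a class extending BaseSpringTest:

```java
    public void testChangesThatMustCommit() {
        NodeRef companyHome = repositoryHelper.getCompanyHome();
        // ... create or modify nodes here ...

        // By default the surrounding test transaction is rolled back.
        // setComplete() marks it for commit instead, and endTransaction()
        // ends it immediately, so commit-time behavior actually fires.
        setComplete();
        endTransaction();

        // Anything created above is now persisted and will survive beyond
        // this test, so clean it up explicitly if needed.
    }
```

Keep in mind that committed test data accumulates in your test database, which is another reason to keep that database separate from your working Alfresco instance.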

Purely as an example, here is the basic structure of what my test classes look like (package and imports omitted for brevity):

public class TestExampleSpringTest extends BaseSpringTest {
    private AuthenticationComponent authenticationComponent;
    private NodeService nodeService;
    private Repository repositoryHelper;

    @SuppressWarnings("deprecation")
    @Override
    protected void onSetUpInTransaction() throws Exception {
        super.onSetUpInTransaction();

        // Grab any Spring beans we need. Here we use the low-level
        // 'dbNodeService'; you may prefer to test against the public
        // 'NodeService' bean, which applies security and auditing.
        this.nodeService = (NodeService) applicationContext.getBean("dbNodeService");
        this.authenticationComponent = (AuthenticationComponent) applicationContext.getBean("authenticationComponent");
        this.repositoryHelper = (Repository) applicationContext.getBean("repositoryHelper");

        // Set the current authentication to the admin user.
        // This is the easiest way to get an authenticated session for
        // manipulating the repository.
        this.authenticationComponent.setCurrentUser("admin");
    }

    public void testExampleTest() {
        // normally you'd test your own custom code here.
        NodeRef companyHome = repositoryHelper.getCompanyHome();
        String fileName = "myFile.txt";
        QName assocQName = QName.createQName(NamespaceService.CONTENT_MODEL_1_0_URI, fileName);
        Map<QName, Serializable> fileProperties = new HashMap<QName, Serializable>();
        fileProperties.put(ContentModel.PROP_NAME, fileName);
        ChildAssociationRef newNodeAssoc = nodeService.createNode(companyHome, ContentModel.ASSOC_CONTAINS, assocQName,
                ContentModel.TYPE_CONTENT, fileProperties);

        NodeRef newFileNodeRef = newNodeAssoc.getChildRef();
        assertNotNull(newFileNodeRef);
        assertEquals("file name should be correct", fileName,
                nodeService.getProperty(newFileNodeRef, ContentModel.PROP_NAME));
    }

}
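Tying this back to the custom action use case that motivated the wiki example: with the same setup, you can look up the ActionService bean and execute your action against a node you create in the test. A sketch, where "my-custom-action" is a hypothetical action name you would replace with your own, and actionService is assumed to have been fetched in onSetUpInTransaction() alongside the other beans:

```java
    public void testMyCustomAction() {
        // Assumes, in onSetUpInTransaction():
        // this.actionService = (ActionService) applicationContext.getBean("ActionService");

        // Create a node for the action to operate on.
        NodeRef companyHome = repositoryHelper.getCompanyHome();
        ChildAssociationRef assoc = nodeService.createNode(companyHome,
                ContentModel.ASSOC_CONTAINS,
                QName.createQName(NamespaceService.CONTENT_MODEL_1_0_URI, "actionTarget.txt"),
                ContentModel.TYPE_CONTENT);
        NodeRef target = assoc.getChildRef();

        // "my-custom-action" is a placeholder for your registered action name.
        Action action = actionService.createAction("my-custom-action");
        actionService.executeAction(action, target);

        // Assert on whatever your action is supposed to change, e.g. an aspect:
        // assertTrue(nodeService.hasAspect(target, MyModel.ASPECT_PROCESSED));
    }
```

Because the test runs inside a rollback transaction, the node and any changes your action makes are discarded automatically when the test method completes.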

As a big fan of automated testing, I find the ability to test my code within the context of a real Alfresco repository invaluable. At Rothbury Software, we use Jenkins for continuous integration, and being able to run these robust tests of our code as part of our build process has many advantages. There is a cost in terms of test run times, since each test class needs to start up a fresh Alfresco context. However, the ability to continually validate functionality enables refactoring without fear of regressions. It could also be advantageous if you want to use a Test Driven Development process for Alfresco development. I hope you’ve found this useful if you’ve ever been curious about the example test for custom actions, or wanted the ability to fully exercise your custom Alfresco code as part of a Continuous Integration process.


[i] http://wiki.alfresco.com/wiki/Custom_Actions#Testing_the_action

[ii] https://issues.alfresco.com/jira/browse/ALFCOM-3691