Determining Actors and Defining Back-end Stories

Last year I was involved in rescuing the delivery of an Educational Loan Servicing system for a couple of state education departments (the trials and tribulations are a story for another blog). Like a number of projects in the financial, insurance, and investment banking industries, this project involved a significant number of back-end batch processes.

There were a few problems that the teams faced when attempting to discover the stories:

  1. There was minimal user interface and it wasn’t readily apparent who the users were for these back-office applications.
  2. The team was at a loss on how to define the stories. Most of the stories by themselves provided no benefit — determining business value was difficult.

Let’s tackle these two separately.

Determining Actors

A number of people think of users solely as people. The problem with equating users with people is that the resulting stories are often too big to fit in an iteration and are better classified as epics. Teams then frequently get wrapped around the axle trying to decompose those epics into smaller stories that are still defined from an individual user's perspective and still provide business value.

The trick is to realize that it is OK to have users who are not people. We can use the RUP concept of an actor to our advantage here.

An “actor” is anything with behavior, including the System Under Discussion (SuD) when it calls upon the services of other systems. Actors are roles played not only by people, but by organizations, software, and machines. — Craig Larman

Craig differentiates between three types of actors:

  1. Primary actor: has user goals fulfilled via the SuD’s services
  2. Supporting actor: provides a service to the SuD, e.g., an external credit card authorization service
  3. Offstage actor: has an interest in the behavior of the use case (or epic) but is neither primary nor supporting, e.g., a government tax agency.

Actors can be identified by asking questions like:

  • Who uses the system (SuD)?
  • Who starts and stops the system?
  • Who does system administration?
  • Who does user and security management?
  • Is there a monitoring process that restarts the system on failure?
  • Who evaluates system activity and performance?
  • Who evaluates system logs? Are these remotely retrieved?
  • Who gets paged on errors or failures?
  • Do other systems call on the SuD to do their work?
  • Does the system do something in response to a time event? If so, then Time is an actor.

Identifying the actors and determining their goals allows us to better decompose the stories.

Defining Stories

The loan servicing system involved the creation, transformation, and transfer of a number of file types, each containing multiple record types. The data was exchanged between schools, loan providers, loan guarantors, loan servicers, endorsers, and employers, not to mention the students and their parents. These files were often batched for processing, though the desire was to move to “real-time” processing sometime in the future.

The team initially created stories for each step of the loan origination process, like so:

  • Create Stafford Loan from CL4 App/Send
  • Create Stafford Loan from CL5 App/Send
  • Create Stafford Loan from CRC
  • Validate Loan information for Stafford Loan
  • Validate Enrollment information for Stafford Loan
  • Validate Borrower information for Stafford Loan
  • Search for Loan
  • Search for Disbursement(s)
  • Search for School

The way the team approached the work caused a number of issues:

  • Most of the stories were not independent; the team discovered, for example, that updating a loan from one type of record was very similar to updating it from another. So which story gets the bulk of the work?
  • The team also noticed that if one type of creation story was overestimated, all the creation stories were likely overestimated. After a few iterations, the team really couldn’t say how much work actually remained.
  • The team started working on the creation stories and after a half-dozen iterations realized that they had produced nothing of value; there wasn’t anything complete end-to-end. They had started working layer-by-layer instead of trying to build threads through the system.

A better approach would have been to decompose the epics in a manner that ensured that the resulting stories would adhere to the INVEST criteria (Independent, Negotiable, Valuable, Estimable, Sized appropriately, Testable).

Split by File Type

For example, an epic like “As a loan originator, I want to receive a correctly formatted Stafford Loan application so that I can process the loan request” could be decomposed by the types of files that contain the loan information:

  • Create Stafford Loan from CL4 App/Send files
  • Create Stafford Loan from CL5 App/Send files
  • Create Stafford Loan from CRC files
  • Create Stafford Loan from CAM files

The backlog would have similar stories for creating other types of Loans: Federal Parent Loan for Undergraduate Student (PLUS loans), GRAD-PLUS loans, etc.

Split by Type of Information within the File

The stories mentioned above are still too big to complete in an iteration and need to be split further. One option is to split by the record types within the file.

  • As a loan servicer, I want to generate a CL4 file with the correct header information so that I can route the file correctly.
  • As a loan originator, I want to receive a CL4 file with the correct header information so that I can process the file correctly.
  • As a loan originator, I want to receive a CL4 file with correctly formatted Borrower Detail records (CL4 @1-02) so that I can process those correctly.
  • As a loan originator, I want to receive a CL4 file with correctly formatted Loan entry records (CL4 @1-07) so that I can process them correctly.
  • As a loan originator, I want to receive a CL4 file with correctly formatted Sub/Unsub Reallocation Loan Decrease records (CL4 @1-13) so that I can process them.
  • As a loan originator, I want to receive a CL4 file with correctly formatted Post-Withdrawal Return/Refund records (CL4 @1-28) so that I can refund funds.
  • And so on.

The backlog would have similar stories for creating Stafford Loans from other file types: CL5, CRC, CAM.
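Record-type splitting also maps naturally onto the code: each record type gets its own small parser, which can be built and tested as its own story. Below is a minimal sketch of that dispatch; the type codes and field offsets are purely illustrative, not the real CommonLine CL4 layout.

```python
# Sketch of record-type dispatch for a fixed-width batch file.
# Record codes and field offsets are illustrative assumptions only.

def parse_borrower(line):
    # hypothetical layout: type code in cols 1-5, SSN in cols 6-14,
    # borrower name in cols 15-44
    return {"type": "borrower",
            "ssn": line[5:14],
            "name": line[14:44].strip()}

def parse_loan(line):
    # hypothetical layout: loan id in cols 6-22, amount in cols 23-32
    return {"type": "loan",
            "loan_id": line[5:22].strip(),
            "amount_cents": int(line[22:32])}

# One parser per record type: each entry here corresponds to one story,
# and unknown record types are surfaced instead of silently dropped.
PARSERS = {
    "@1-02": parse_borrower,   # Borrower Detail
    "@1-07": parse_loan,       # Loan entry
}

def parse_file(lines):
    records, rejects = [], []
    for line in lines:
        parser = PARSERS.get(line[:5])
        if parser:
            records.append(parser(line))
        else:
            rejects.append(line)
    return records, rejects
```

Adding support for a new record type then means adding one parser and one dictionary entry, which keeps the stories independent.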

Split by Data Filters

Different actors have different expectations of what data (transactions) a file contains. Some want data for the past week, some for the past 24 hours, some only from specific sources. To account for this, we can split stories by the time period or source each actor requires.

  • As Aggieland Credit Union (a loan originator), I want to receive a CL4 file with only the last week of data.
  • As Education Finance Partners of New Mexico (a loan originator), I want to receive a CL4 file twice a month.
  • As First National Bank of Central Texas (a loan originator), I want to receive a CL4 file with only updates made in the last 5 business days.

Split by Scheduling

Stories can be created based on schedules. For example, real-time processing versus end-of-day processing.

  • As Aggieland Credit Union, I want to receive a CL4 file between 8:00 p.m. CST and 8:30 p.m. CST.
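A schedule-driven story like this can be sketched as a simple gate that only releases a file inside the agreed delivery window. The originator key and times below are illustrative, and a real implementation would handle time zones explicitly rather than trusting the wall clock:

```python
# Sketch of a delivery-window gate for scheduled batch sends.
# The originator and window are illustrative; the story specifies CST,
# so a real system would be explicit about time zones.
from datetime import time

DELIVERY_WINDOWS = {
    # originator -> (window open, window close), local wall-clock time
    "aggieland_cu": (time(20, 0), time(20, 30)),   # 8:00-8:30 p.m.
}

def ready_to_send(originator, now):
    """True only while the originator's delivery window is open."""
    start, end = DELIVERY_WINDOWS[originator]
    return start <= now <= end
```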

“Post-Office” stories that deal with routing and file transfers could be another batch of stories.

Order of Stories

Even with this splitting of stories, it is critical that the team select stories wisely. Remember that you want to deliver quick increments of value and exercise the system end-to-end as soon as possible.

Stories selected should attempt to build threads through the system; don’t tackle all the file generation stories, then all the validation stories, then all the dissemination stories. Instead, choose a file type and record type and try to build that one through. Don’t worry about alternatives initially; just get the happy-path done quickly. Implement the alternate paths next to make the functionality robust.

What Not to Do

Do not split along process lines.

  1. Design the thing
  2. Code it
  3. Write unit tests
  4. Write acceptance tests
  5. Document the design and implementation details

This doesn’t work very well, as none of the five items produces value by itself.

Don’t split across architectural lines either.

  1. Design
  2. Code the UI
  3. Implement the business logic
  4. Implement the data layer
  5. Write acceptance tests
  6. Document the design and implementation details

The items by themselves don’t provide value, and, as in waterfall projects, integration is pushed to the end instead of being attempted early.

Helpful Tools and Additional Information

Irrespective of how the stories were created, the teams in this case very quickly realized that a tool like FitNesse was crucial. It allowed the teams to mock up files and records that they could develop against and use for automated functional testing.

  • Unlike requirements captured in large formal documents, FitNesse-based requirements are executable.
  • Execution results are concrete — tests either pass or fail, thereby providing a true indicator of feature and project status.
  • Tests can be used as a starting discussion point for iteration planning — tests are demonstrable.
  • FitNesse provided independence from the database:
    • Database state changes can make testing difficult; testers must revert the database to its initial state after each change.
    • Testing hard-to-find situations (in live databases) becomes easier, because the data is set up manually.
    • Executing hundreds or thousands of tests can become slow when the data comes from a database, making the quick feedback teams need impossible.
  • Processes can be tested via workflow tests.
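FitNesse fixtures are typically written in Java against wiki tables, but the table-driven idea itself is easy to sketch. Here each row of a decision table is an executable requirement checked against a hypothetical validation rule (not a real CommonLine business rule):

```python
# A FitNesse-style decision table sketched in Python: each row pairs
# inputs with an expected result, so the requirement itself is executable.

def valid_disbursement(amount_cents, loan_limit_cents):
    # illustrative rule: a disbursement must be positive and within limit
    return 0 < amount_cents <= loan_limit_cents

# rows of (amount, limit, expected) -- the analogue of a FitNesse wiki table
DECISION_TABLE = [
    (100_000, 500_000, True),
    (600_000, 500_000, False),
    (0,       500_000, False),
]

def run_table(table):
    """Return the rows whose actual result disagrees with the expected."""
    return [row for row in table
            if valid_disbursement(row[0], row[1]) != row[2]]
```

An empty result means every row, and thus every requirement in the table, passes; failing rows point directly at the disputed requirement.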

Good places to get additional information are Craig Larman’s “Applying UML and Patterns” for a discussion of actors and their goals, and Mike Cohn’s “User Stories Applied” for guidance on writing good user stories. FitNesse-related information can be found in Rick Mugridge and Ward Cunningham’s “Fit for Developing Software: Framework for Integrated Tests” and at http://www.fitnesse.org/