
One page installation report of any workflow

You can define this as its own job in the “Job Library” folder and include it as the last step of any workflow you want to generate a one-page report for. Create a job with a single step that does Miscellaneous -> Evaluate Script.

Then, paste in the BeanShell script below.

Now you can include this step in any workflow, and it will create a report called “Installation Report” in the “Reports” tab of the build-life dashboard.

import com.urbancode.anthill3.domain.jobtrace.*;
import com.urbancode.anthill3.domain.workflow.*;
import com.urbancode.anthill3.domain.buildlife.BuildLife;
import com.urbancode.anthill3.domain.buildlife.BuildLifeFactory;
import com.urbancode.anthill3.domain.workflow.*;
import com.urbancode.anthill3.domain.agent.*;

import com.urbancode.anthill3.runtime.*;
import com.urbancode.devilfish.services.*;

import java.io.*;
import java.util.*;
import com.urbancode.commons.fileutils.FileUtils;


private String getStreamAsString(InputStream inStream)
	throws IOException {
	StringBuffer result = new StringBuffer();
	try {
		byte[] buffer = new byte[4096];
		int length = 0;
		while ((length = inStream.read(buffer)) > 0) {
			result.append(new String(buffer, 0, length));
		}
	} finally {
		try {
			inStream.close();
		} catch (Exception e) {
		}
	}
	return result.toString();
}

WorkflowCase workflow = WorkflowLookup.getCurrentCase();
JobTrace[] jobTraces = workflow.getJobTraceArray();
JobTrace jobTrace = JobTraceLookup.getCurrent();
String publishPath = VarService.getInstance().resolve(PublishPathHelper.getInstance().getPublishPath(jobTrace, "Installation Report"));
FileUtils.assertDirectory(publishPath);
File reportFile = new File(publishPath, "installation-report.html");
BufferedWriter writer = new BufferedWriter(new FileWriter(reportFile));

writer.write(
	"<html>\n" + "<head>\n" +
	"<title>Installation Report</title>\n" +
	"</head>\n" + "<body>\n" +
	"<h1>Installation Report</h1>\n" + "<table border=\"1\">\n"
);

for (int j = 0; j < jobTraces.length; j++) {
	Agent thisAgent = jobTraces[j].getAgent();
	writer.write(
		"<tr><td colspan=\"2\">" +
		"<b>JOB NAME: " + jobTraces[j].getName() + "</b><br/>" +
		"Agent Name: " + thisAgent.getName() + "<br/>" +
		"Agent Hostname: " + thisAgent.getHostname() + "</td></tr>\n"
	);
	StepTrace[] stepTraces = jobTraces[j].getStepTraceArray();
	for (int s = 0; s < stepTraces.length; s++) {
		writer.write(
			"<tr><td>" +
			"Step: " + stepTraces[s].getName() + "<br/>" +
			"Agent Name: " + thisAgent.getName() + "<br/>" +
			"Agent Hostname: " + thisAgent.getHostname() + "</td>\n"
		);

		try {
			// grab the log output of the step's first command, if any
			CommandTrace cmdTrace = stepTraces[s].getCommandTraceArray()[0];
			FileInfo outputFile = LogPathHelper.getInstance().getLogFileInfoArray(cmdTrace)[0];
			InputStream inStream = FileInfoService.getInstance().getFileInfoAsStream(outputFile);
			String output = getStreamAsString(inStream);
			writer.write(
				"<td><pre>" + output + "</pre></td></tr>\n"
			);
		} catch (Exception e) {
			writer.write(
				"<td>NO OUTPUT</td></tr>\n"
			);
			continue;
		}
	}
}
writer.write("</table>\n" + "</body>\n" + "</html>\n");
writer.flush();
writer.close();

You can download the beanshell script here.

Life-cycle statuses && environment short-names in Anthill Pro

A nice convention that we’ve enjoyed using is to make the “environment shortnames” of our environments synonymous with the “statuses” in the Life-Cycle Model of your project in Anthill Pro. This way, upon deployment you “stamp” the build life with the shortname of the environment you’re deploying to, which lets you see what state a given build life has reached in your development lifecycle. It also lets you verify that the build artifacts you deploy are the exact same ones that were deployed to the previous ancestor environments (stage, qa1, qa2, uat, sit, dev, etc.).

We create a global Library Job which stamps the environment shortname as the last step of all our workflow models. Only upon a successful deployment does the build life get “promoted” from one environment to the next.
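As a rough sketch of what that Library Job’s stamp step could evaluate (assuming a stamp template such as ${stampContext:env}, where “env” is just an illustrative key, and that EnvironmentLookup is available as it is in the pre-condition scripts later in this post), the stamping script might look like:

import com.urbancode.anthill3.runtime.scripting.helpers.*;

// Look up the shortname of the environment this workflow is deploying to
// and expose it to the stamp template via the stamp context.
String envShortName = EnvironmentLookup.getCurrent().getShortName();
stampContext.put("env", envShortName);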

Additionally, the stamp we apply to the build life always shows REVISION_NUM-(trunk|branches/num). Upon every successful build this stamp is applied to the build life, so we know exactly where the code is located in our repository, what environment it is on, and when it was deployed there.

We create the stamp by getting the revision number with this “stamping script” that comes pre-baked into the latest version of AnthillPro.

import com.urbancode.vcsdriver3.*;
import com.urbancode.anthill3.domain.buildlife.*;
import com.urbancode.anthill3.runtime.scripting.helpers.*;

int getMaxChangeSet(BuildLife life) {
 int result = 0;
 ChangeLog[] changelogArray = ChangeLogHelper.getChangeLogArray(life);
 for (int i = 0; i < changelogArray.length; i++) {
  ChangeSet[] changesetArray = changelogArray[i].getChangeSetArray();
  for (int j = 0; j < changesetArray.length; j++) {
   ChangeSet changeset = changesetArray[j];
   String id = changeset.getId();

   // edit out the "r" character for svn
   if (id.startsWith("r")) {
    id = id.substring(1);
   }
   int num = (new Integer(id.trim())).intValue();
   if (num > result) {
    result = num;
   }
  }
 }
 return result;
}

// If there is no changelog, look up the previous build
// and take the highest number from that (if present, else keep searching).

int highestChangeset = 0;
BuildLife life = BuildLifeLookup.getCurrent();
while(highestChangeset == 0 && life != null) {
 highestChangeset = getMaxChangeSet(life);
 life = life.getPrevBuildLife();
}

stampContext.put("changeset", ""+highestChangeset);

This allows us to define the “stamp” in the build job as:

${stampContext:changeset}-${property:svn.source}

This provides us with a build stamp that looks like 82773-trunk or 82881-branches/v_613.

In order to capture svn.source, we create a text input called “svn.source” on the originating workflow as a workflow property with a default value. This variable is also used when pulling the source from the repository, so that the checkout path is $SVN_BASE/$svn.source as defined when you edit the sources of the originating workflow.
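For example (the repository root below is purely illustrative), with svn.source set to branches/v_613 the checkout path resolves along these lines:

$SVN_BASE/$svn.source  ->  https://svn.example.com/myproject/branches/v_613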

This makes the “Main” tab of the build-life screen extremely useful to managers and upper management, because it reflects the real-time status of any given piece of software in the development lifecycle. It allows one to reconcile the amount of development time and money that went into producing the defects and features deployed to production, or to any other environment of interest. This method also provides clear evidence, for auditing purposes, that a given build life which was deployed to production was also deployed to all the proper testing and staging environments along the way. What a treat.

Performing job actions based on properties set in workflows with Anthill Pro

Sometimes you may want conditional behavior to occur during deployment based upon user input during the “Run Secondary Process” phase. You can define properties inside the workflow, which provide handy form elements that can be used to make choices for a given deployment.

In this case, we’re going to use a simple checkbox to determine whether or not to perform a database update with the deployment. Inside the workflow itself, on the Properties tab, we add a checkbox and call it “database.deploy”. If the checkbox is checked its value will be “true”; otherwise it will be “false”. Essentially, we want to run the database deploy step in our deployment job only when the database.deploy property is set to true. By creating properties like this inside the workflow, the end user is presented with these form elements at deploy time.

Note: It is *not* best practice to have different things happen when deploying the same build life to the same environment, because it can create a situation that cannot be rolled back. However, sometimes this is unavoidable.

With that, here’s our example:

return Logic.and(
 StepStatus.allPriorIn(new JobStatusEnum[] { JobStatusEnum.SUCCESS, JobStatusEnum.SUCCESS_WARN }),
 Property.is("database.deploy","true")
);

In this example we’re using the Property.is() method: if the property is set to true, this pre-condition will pass. It is important to note, however, that when the checkbox is *not* checked this step will return as “not needed”, which is not the same thing as “success”. That means every step executed after this one must use a pre-condition of “success or not needed” if we want it to run; otherwise, subsequent steps whose pre-condition is a simple “previous success” will also return “not needed”. If that’s too much of a mouthful, it will be obvious why this breaks once you try it out. Hopefully we’ll save you some frustration here.
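As a rough sketch of what those downstream pre-conditions could look like (using the same helpers shown in the next section; adjust priorIn/allPriorIn to your needs), a step that should run whether or not the database step executed might use:

// Treat "not needed" the same as success so that skipping the database
// deploy does not short-circuit the rest of the job.
return StepStatus.priorIn(
 new JobStatusEnum[] {
  JobStatusEnum.SUCCESS,
  JobStatusEnum.SUCCESS_WARN,
  JobStatusEnum.NOT_NEEDED
 }
);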

Job steps with agent filter based pre-conditions in Anthill Pro

Previously, we discussed Creating environment specific pre-conditions for job steps in Anthill Pro. Perhaps you want job step pre-conditions based on agent filters rather than the environment shortname. If so, here’s an example of how that would go down. Let’s say you have a role in your farm called “weblogic-ejb”, and on those agents you set an agent variable called “role” to “weblogic-ejb”. If you wanted to execute a job step only on servers in this role, you’d set a pre-condition script on the step like this:

Criteria myCriteria = new Criteria() {
 public boolean matches(Object obj) throws Exception {
  return "weblogic-ejb".equalsIgnoreCase(AgentVarHelper.getCurrentAgentVar("role"));
 }
};

return Logic.and(
 StepStatus.priorIn(
  new JobStatusEnum[] {
   JobStatusEnum.SUCCESS,
   JobStatusEnum.SUCCESS_WARN,
   JobStatusEnum.NOT_NEEDED
  }),
 myCriteria
);

Again, our Logic.and() ensures both that our agent filter matches AND that the previous steps were successful or not needed, which is just as important.

Creating environment specific pre-conditions for job steps in Anthill Pro

When creating deploy workflows in Anthill Pro, it’s best practice to keep the deployment in one job if possible and gate the environment specific steps inside that job with pre-conditions. For example, if you wanted to deploy configuration artifact sets as part of your deploy job and have the proper artifact set deployed to the chosen environment, you would write a pre-condition script for each of those environments and attach it to the corresponding step.

For example, let’s say we’re dealing with “dev”, “qa”, “stage” and “prod” artifact sets. We would make four pre-condition scripts:

is_dev:

Logic.and(
 JobStatus.allAncestorsIn(
  new JobStatusEnum[]{
   JobStatusEnum.SUCCESS,
   JobStatusEnum.NOT_NEEDED
  }
 ),
 new Criteria() {
  public boolean matches(Object obj) throws Exception {
   return EnvironmentLookup.getCurrent().getShortName().equals("dev");      
  }
 }
);

is_qa:

Logic.and(
 JobStatus.allAncestorsIn(
  new JobStatusEnum[]{
   JobStatusEnum.SUCCESS,
   JobStatusEnum.NOT_NEEDED
  }
 ),
 new Criteria() {
  public boolean matches(Object obj) throws Exception {
   return EnvironmentLookup.getCurrent().getShortName().equals("qa");      
  }
 }
);

is_stage:

Logic.and(
 JobStatus.allAncestorsIn(
  new JobStatusEnum[]{
   JobStatusEnum.SUCCESS,
   JobStatusEnum.NOT_NEEDED
  }
 ),
 new Criteria() {
  public boolean matches(Object obj) throws Exception {
   return EnvironmentLookup.getCurrent().getShortName().equals("stage");      
  }
 }
); 

is_prod:

Logic.and(
 JobStatus.allAncestorsIn(
  new JobStatusEnum[]{
   JobStatusEnum.SUCCESS,
   JobStatusEnum.NOT_NEEDED
  }
 ),
 new Criteria() {
  public boolean matches(Object obj) throws Exception {
   return EnvironmentLookup.getCurrent().getShortName().equals("prod");      
  }
 }
);

Using this method requires that the environment shortnames be named exactly “dev”, “qa”, “stage” and “prod”. These are defined in the environments section under the System tab. It’s also especially useful to make your environment shortnames match the statuses in your life-cycle model, but that is another topic. Also, take note of this part of the script:

--> JobStatus.allAncestorsIn(new JobStatusEnum[] { JobStatusEnum.SUCCESS, JobStatusEnum.NOT_NEEDED })

This portion of the script ensures not only that we are executing in a specific environment, but also that all previous jobs either executed successfully or returned “not needed” (meaning a past job was skipped because it was not needed).