
How to deploy 1 file with Anthill Pro

I am writing this post because I noticed people typing this search into Google and ending up on our blog. I like to look at those searches to make sure we are giving people the content they are looking for.

If someone is searching for something like that, it tells me we’re likely dealing with a person who has a demo copy of Anthill Pro and simply wants to know how to use it. Understandably, Anthill Pro can appear complicated.

If you want to deploy one file to some agent just to see how Anthill works, this is what you’d need to do. Please bear in mind that this is complex and not meant to be simple; once you understand more about how Anthill works, you will understand why.

  • Install and start the Anthill Pro server.
  • Install an agent, either on the server or on some other computer. The agent and server installers usually come in the same bundle.
  • When configuring the agent during install, make sure to enter the hostname of the server running the Anthill server software.
  • Be sure to start the agent. It does not start by default.
  • As admin, go to System -> Agents -> Available; you should see the agent you just installed in the list. Click it and choose an environment for it. If you want this agent to be in an environment other than what is listed, you will need to add one at System -> Environments.
Now that you have an agent configured and set up, you can create a workflow to perform your deployment (a file copy).
Generally, you copy files from Anthill to a target host running an agent. Before you can do that, you have to store the files inside Anthill’s Codestation repository. This is the difference between “Artifact Deliver” and “Resolve My Artifacts”: an artifact deliver step collects files and stores them in Anthill, while resolving copies them from Anthill to the host. This can be confusing, but get used to it. Artifact deliver steps generally occur in a build; resolve steps generally occur in a deployment.
To keep this simple, we’ll create a Codestation project, put some files in it, then deploy it to one host. A Codestation project in Anthill is a special project designed to hold binary artifacts. It is great for storing things like Apache, JBoss, and JDBC drivers; basically all your components, which you can then deploy any way you see fit.
  • Go to: Administration
  • Create a folder
  • Create a Codestation project in the folder
  • Give it a name, choose Example Life-Cycle Model, click Done.
  • Click the codestation project you just made.
  • Click “new” to make a new Build Life
  • For “Stamp” put 1; this will be version 1 of your component.
  • Click save
  • Click upload file
  • Choose “APP” for artifact set.
  • Click choose file and upload whatever file you want. If you want to upload many files, upload a zip file; Anthill will unzip it in Codestation.
  • If you enter something into the directory field, Anthill will prefix that path to all the file(s) you’ve uploaded.
Now that you have a file in your codestation repository:
  • Inside your folder: create a project (non-life-cycle based)
  • Put it in the default environment group.
  • Create a job in the project.
    • Create a step
    • Artifacts -> Resolve another project’s artifacts.
    • Choose your codestation project.
    • Click set
    • Choose “APP” artifact set
    • In the stamp, put 1 because we want version 1.
    • Click save, click the project name in the breadcrumb header.
  • In the project, create a new workflow.
  • Check off all environments, click save.
  • In the workflow, click “definition”.
    • Select “embedded definition”
    • Click start
    • Insert after
    • Choose your job
    • Choose “All ancestor jobs success” for the Pre-Condition
    • Choose “Fixed Agent Selection” for Agent filter and choose the agent you previously installed and click the plus.
    • Choose “Default” for working directory.
    • Click insert job.
  • Go to: System -> Work Dir Scripts -> Default
    • Change the path to the path where you want your file deployed on your server (a sketch of what such a script looks like follows this list).
    • Click save
  • Click Dashboard
  • Click the workflow you created
    • Choose your environment, click run.
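For reference, a work dir script is essentially a path expression telling the agent where the job runs, which is where your resolved file will land. It can embed scripted values via AnthillPro’s property syntax; in the sketch below, the base path is made up and the ${bsh:…} helper usage is an assumption to check against the default script in your install:

/opt/deploy/${bsh:ProjectLookup.getCurrent().getName()}

Changing the fixed part of the path, or replacing the whole thing with a literal like /opt/deploy, is all this step requires.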
That’s how you copy one file in Anthill Pro. Having this experience will reward you with enough information to maybe do something useful. Add jobs, split them, add agents and environments; go nuts.

How to clean an oracle schema without administrative or dba permissions

Working in an environment where you do not have full administrative control over your Oracle instances may leave you wanting to clean your schema. The easiest way to do this is to drop and re-create the schema. When that is not an option, you will need to drop all the user objects yourself. After having to do this over and over again, I’ve put together a script that will let you clean your schema without full permissions. Run it as the schema owner, or as a user granted the necessary DROP privileges on that schema.

declare
  l_owner varchar2(30) := 'SCHEMA_NAME';  -- schema to clean
begin
  -- all_objects only lists objects you can actually see, so this works
  -- without DBA privileges (dba_objects would require them)
  for x in (select object_name, object_type
              from all_objects
             where owner = l_owner) loop
    if x.object_type = 'TABLE' then
      -- purge skips the recycle bin; cascade constraints removes dependent FKs
      execute immediate 'drop table ' || l_owner || '.' || x.object_name
                        || ' cascade constraints purge';
    elsif x.object_type not in ('INDEX', 'PACKAGE BODY', 'TRIGGER', 'LOB',
                                'TABLE PARTITION', 'TABLE SUBPARTITION',
                                'INDEX PARTITION', 'INDEX SUBPARTITION') then
      -- these skipped types are dropped along with their parent objects
      execute immediate 'drop ' || x.object_type || ' ' || l_owner || '.' || x.object_name;
    end if;
  end loop;
end;
/

Life-cycle statuses && environment short-names in Anthill Pro

A nice convention that we’ve enjoyed using is to make the environment “shortnames” synonymous with the “statuses” in your project’s Life-Cycle Model in Anthill Pro. This way, upon deployment you “stamp” the build life with the shortname of the environment you’re deploying to, which lets you see what state a given build life is in within your development lifecycle. It also guarantees that the build artifacts you deploy are the exact same ones that were deployed to the previous ancestor environments (stage, qa1, qa2, uat, sit, dev, etc.).

We create a global Library Job that stamps the environment shortname as the last step of all our workflow models. Only upon a successful deployment does the build life get “promoted” from one environment to the next.
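As a sketch of what that Library Job step does, the pattern mirrors the changeset script below; the EnvironmentLookup helper and getShortName() call are assumptions here, so check which helper classes your AnthillPro version actually exposes:

import com.urbancode.anthill3.runtime.scripting.helpers.*;

// Hypothetical: resolve the environment this workflow is deploying to and
// hand its shortname to the stamp step via stampContext (the same mechanism
// the changeset script below uses). EnvironmentLookup/getShortName() are
// assumed names, not confirmed API.
String shortName = EnvironmentLookup.getCurrent().getShortName();
stampContext.put("env", shortName);

The stamp value on the step would then be ${stampContext:env}.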

Additionally, the “stamp” we apply to the build life always shows REVISION_NUM-(trunk|branches/num). Upon a successful build this stamp is applied to the build life, so we know exactly where the code is located in our repository, what environment it exists on, and when it was deployed there.

We create the stamp by getting the revision number with this “stamping script”, which comes pre-baked into the latest version of AnthillPro.

import com.urbancode.vcsdriver3.*;
import com.urbancode.anthill3.domain.buildlife.*;
import com.urbancode.anthill3.runtime.scripting.helpers.*;

int getMaxChangeSet(BuildLife life) {
 int result = 0;
 ChangeLog[] changelogArray = ChangeLogHelper.getChangeLogArray(life);
 for (int i = 0; i < changelogArray.length; i++) {
  ChangeSet[] changesetArray = changelogArray[i].getChangeSetArray();
  for (int j = 0; j < changesetArray.length; j++) {
   ChangeSet changeset = changesetArray[j];
   String id = changeset.getId();

   // edit out the "r" character for svn
   if (id.startsWith("r")) {
    id = id.substring(1);
   }
   int num = (new Integer(id.trim())).intValue();
   if (num > result) {
    result = num;
   }
  }
 }
 return result;
}

// If there is no changelog, look up the previous build
// and take the highest number from that (if present, else keep searching).

int highestChangeset = 0;
BuildLife life = BuildLifeLookup.getCurrent();
while(highestChangeset == 0 && life != null) {
 highestChangeset = getMaxChangeSet(life);
 life = life.getPrevBuildLife();
}

stampContext.put("changeset", ""+highestChangeset);

This allows the use of the following “stamp” code, as defined in the build job:

${stampContext:changeset}-${property:svn.source}

This is what provides us with a build stamp that looks like 82773-trunk or 82881-branches/v_613.

In order to capture svn.source, we create a text input called “svn.source” on the originating workflow as a workflow property with a default value. This variable is also used when pulling the source from the repository, such that the source is $SVN_BASE/$svn.source, as defined when you edit the sources in the originating build life.

This makes the “Main” tab of the build-life screen extremely useful to managers and upper management, because it reflects the real-time status of any given piece of software in the development lifecycle. It allows one to reconcile the development time and money that went into producing the defects and features deployed to production, or to any other environment one is interested in. It also provides clear credibility, for auditing purposes, that a given build life deployed to production was also deployed to all the proper testing and staging environments. What a treat.

Storing files in the Amazon S3 cloud with your application

Amazon S3 storage is one of the best ways to back up data or store files for your web application in a secure, reliable way. One of the greatest benefits of using S3 is that it is not only a simple way to store large files, but also a great way to provide a very fast download experience for your end users. In order to use Amazon S3 you will need to create an account and set yourself up; more information about S3 can be found on the official site.

We had big issues finding a good way to interact with the existing APIs for putting large files up in S3, so we wrote our own in C. The use case was that we wanted an easy and reliable way to send large files (100 MB – 2 GB) to S3 and know for sure whether they made it there or not. Our program fork()s itself and takes an extra optional argument which serves as a callback URL. The tool is called S3Ninja and is available for download at http://s3ninja.wordpress.com/

Example usage:

s3ninja [access_key_id] [secret_access_key] command [arguments]
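If you would rather talk to S3 directly from application code instead of shelling out to a tool, here is a minimal sketch using the AWS SDK for Java. This is not S3Ninja; the bucket name, key, file name, and credentials are all placeholders:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;

public class S3Upload {
    public static void main(String[] args) {
        // Placeholder credentials; substitute your own access key pair.
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_ACCESS_KEY"));
        // Upload a local file to a placeholder bucket and key.
        s3.putObject(new PutObjectRequest(
                "my-bucket", "backups/big-file.tar.gz", new File("big-file.tar.gz")));
    }
}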

Performing job actions based on properties set in workflows with Anthill Pro

Sometimes you may want conditional behavior to occur during deployment based on user input in the “Run Secondary Process” dialog. You can define properties on the workflow, which provide handy form elements that can be used as input to make choices for a given deployment.

In this case, we’re going to use a simple checkbox to determine whether to perform a database update with the deployment. Inside the workflow, on the properties tab, we’re going to add a checkbox and call it “database.deploy”. If the checkbox is checked, its value will be “true”; otherwise it will be “false”. Essentially, we want to run the database deploy step in our deployment job when the database.deploy property is set to true. By creating properties like this inside the workflow, the end user will be presented these form elements as options at deploy time.

Note: It is *not* best practice to have different things happen when deploying a build life to the same environment, because it can create a situation that cannot be rolled back. However, sometimes this is unavoidable.

With that, here’s our example:

// Step pre-condition: run only when every prior step succeeded
// (or succeeded with warnings) AND the "database.deploy" workflow
// property was checked at deploy time.
return Logic.and(
 StepStatus.allPriorIn(new JobStatusEnum[] { JobStatusEnum.SUCCESS, JobStatusEnum.SUCCESS_WARN }),
 Property.is("database.deploy", "true")
);

In this example, we’re using the “Property.is()” method; if the property is set to true, this pre-condition passes and the step runs. It is important to note that when the checkbox is *not* checked, this step returns “not needed”, which is not the same thing as “success”. That means every step executed after this one must use “Success or Not Needed” criteria if we want it to run; otherwise, all subsequent steps will also return “not needed” when their pre-condition is simply “previous success”. If that’s too much of a mouthful, it will be obvious why this breaks once you try it out. Hopefully we’ll save you some frustration here.
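As a sketch, the pre-condition for the steps that follow the optional database step would look something like the snippet below; whether the enum constant is spelled NOT_NEEDED in your AnthillPro version is an assumption worth verifying:

// Pre-condition for steps after the optional one: treat "not needed"
// like success so the rest of the job keeps running even when the
// database step was skipped. NOT_NEEDED is an assumed constant name;
// check JobStatusEnum in your AnthillPro version.
return StepStatus.allPriorIn(new JobStatusEnum[] {
 JobStatusEnum.SUCCESS,
 JobStatusEnum.SUCCESS_WARN,
 JobStatusEnum.NOT_NEEDED
});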