
Essential Android SDK Emulator Commands for CLI

Some of us are grumpy old folks who really don't want or need some heavy IDE just to edit some text files. Some of us are folks who just want to debug or QA an Android app but don't want or need to install a bunch of extra junk to do it.

Whatever your reason, you're here because you want to use the Android SDK's emulator but you don't want to deal with the IDE. First off, if you need help installing the Android emulator via the command line, I wrote an article about that already; read it for basic instructions.

Now that you have the emulator installed, how do you do stuff like install your APK or read the “console” output? When the emulator whines that it needs an update and would you please just launch the SDK so it can update, how do you brazenly ignore it and update via CLI? Well, let me show you a few helpful commands.

Please note that all these instructions assume you are in your 'android' folder. That's where you unpacked the SDK zip; it contains the "emulator", "platforms", "tools", and a few other directories.

Updating via the CLI

Let’s start easy. Assuming you’re in your android directory:

cd tools
./bin/sdkmanager --update

That’s all there is to it. Please note that Android SDK seems pretty sensitive to what your current directory is, so I am pretty sure being in the “tools” directory is necessary. You can try it from other directories if you wish, but if it doesn’t work, try doing it from tools. Alternatively, you can also set an environment variable to make sure it knows the right path (I want to say it is ANDROID_HOME or ANDROID_SDK, but my personal preference is to just run it from the correct directory).
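As a sketch, the environment-variable route looks something like this, assuming ANDROID_HOME is the variable this SDK release reads (check the sdkmanager docs if it complains) and that you unpacked the zip into ~/android:

```shell
# Assumption: ANDROID_HOME points at the directory containing tools/, platforms/, etc.
export ANDROID_HOME="$HOME/android"
"$ANDROID_HOME/tools/bin/sdkmanager" --update
```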

Installing Your APK

Assuming you have an APK to install, the command line to install it is fairly easy. First, make sure your Android emulator is running (see my previous article). Then, from your android directory, type:

./platform-tools/adb install -r path/file.apk

Of course, modify path/file.apk to whatever file you wish to install. The -r option tells adb to replace an existing version first. In my experience, however, this doesn't always work, so your mileage may vary; it may depend on whether the application is running at the time. I usually uninstall the app from the emulated device first, but adb has an "uninstall" command you can dig into if you wish.
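If -r lets you down, the uninstall-then-install dance looks like this (com.example.myapp is a placeholder for your app's package name; adb uninstall takes the package, not the APK path):

```shell
# Remove the old install by package name, then install fresh.
./platform-tools/adb uninstall com.example.myapp
./platform-tools/adb install path/file.apk
```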

Accessing Logs

Android emits a ton of logs that aren't easily visible within the device. The adb command allows you to see them. The way it works is you ask adb to emit logs that match some kind of pattern and severity level. For instance, again from your android directory and again with the emulator running:

./platform-tools/adb logcat net.epicforce:D ReactNativeJS:D \*:S

So what does this mean? It means show me logs for net.epicforce at DEBUG level, ReactNativeJS at DEBUG level, and everything else is SILENT (not shown).

Please note that just searching for your application’s package doesn’t always cut it; because my application uses React Native, it gets two kinds of log messages. In fact, the majority of the logs I get come from React Native instead of my own application.

You can use V (Verbose), D (Debug), I (Info), W (Warning), E (Error), F (Fatal), and S (Silent) to configure your log filters.
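For example, a quick way to watch for crashes only, regardless of tag, is to raise the bar to Error for everything:

```shell
# Show only Error-level (and above) messages from any tag.
./platform-tools/adb logcat '*:E'
```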

For more information on logcat, here’s the official documentation.

Conclusion

So that should get you started. In particular, adb is a powerful tool and there's a lot you can do with it. Reading the docs is recommended, but for what I'm doing, these commands cover pretty much everything.

Good luck and have fun!

Making an Awesome Resume Part 1: The Audience

One of the most important things you can do to advance your career is to make a good resume.  As someone who made a really successful career out of getting jobs for myself as a contractor, and as someone who currently works with the hiring process, I’ve got a lot of expertise in this area.  Let me tell you what has worked for me, and what I like to see in a resume, and then I’d love to hear about your own experiences in the comments.

As a forewarning, these tips are geared towards technology jobs.  Your mileage may vary in other fields, but I’ve had friends and colleagues in non-technical fields apply my suggestions to their resumes with great success.  So this information is of value no matter what field you’re in!

This is part one of a … hmm, probably three, maybe four part series on how to make an awesome resume.

Understanding the Process

So for our first article in this series, let's talk about the audience.  When you write a document, you need to know who will be reading it and what they want to see.  Writing a resume is no different. This will be old hat to a lot of you, but people new to the job market especially might not know all this, so let's touch on it.

When you apply for a job, there will usually be between two and four people or groups of people who will see your resume.  In order, they are: the recruiter, HR at the company you wish to work at, technical screeners, and the hiring manager.  Not every job has all four, but at a minimum, HR and the hiring manager are pretty common unless the company is a very small shop.

Let’s talk about each one and what they want to see.

The Recruiter

The recruiter often doesn't work for the company hiring; their job is to fish for people, do some basic screening, and pass them along to the company actually doing the hiring.  Often jobs are posted by recruiters, so it is common to apply for a job and have your first contact be with a recruiting company.

Most recruiters simply check off boxes and may not know your field very well.  They may ask you simple questions and ask for your “years of experience” in different technical skills.  “Years of experience” is the most meaningless metric I can think of; does it mean a literal count of how many days you’ve worked with the technology?  Does using a technology a few times a year for multiple years count as several “years of experience”? Who knows; I, personally, really dislike this metric.

Which is why Epic Force uses technical people to screen technical people.  As recruiters, we can have a good conversation with hires and actually talk tech with people.  Candidates can make up whatever "years of experience" number they wish, but having an actual conversation about the technology will reveal whether they're actually any good.  Look how cleverly I tucked in a sales pitch there!

But I digress; when talking to a recruiter, be prepared to talk about your “years of experience” for each of your skills and have some plausible number for each.  The recruiter is also doing a text search of your resume for the skills he or she is looking for, so laundry listing your skills is important.

HR

If your resume is being submitted directly to a company without a recruiter, it is probably going through the Human Resources department.  HR is overworked and, in some cases, really doesn’t know what job you are going to do. It varies a lot; I have worked with HR departments that know a lot about what their employees do, and I’ve worked with some that don’t know or don’t care.

HR often gets hung up on things like “does this candidate’s education level match the requirements?” rather than more important things like “can this candidate do the job?”  They are sometimes a much cruder filter than the recruiter. Though again, not always.

Like a recruiter, they operate off a checklist with varying degrees of effectiveness.  I've worked as a hiring manager for companies where HR would let pizza drivers with no education or experience get through the filter for senior-level jobs, and I've worked with HR departments that are capable of giving pretty technical interviews, so it is definitely a mixed bag there.

A huge part of your resume is making sure you check the boxes so you can get past the recruiter and HR and on to the real interview.

The Technical Screener and Hiring Manager

Once you’ve gotten past the often crude filters of the recruiter and/or HR, your resume will land in the hands of the people you will actually work with on a day to day basis.  These are the people you ultimately want to talk to in order to get the job.

The hiring manager is probably stressed out, short staffed, and probably needed you to start yesterday.  They don’t want to read your resume; it is a chore when they’ve got twenty other things to tend to.

Therefore, you want your resume to stand out without making the hiring manager “work for it”.  You also want to have detailed past job experiences so that, if the hiring manager is interested in you, they can look through your experience and see some evidence that you can do what you say you can do.

The same goes for other members on the team, or those doing a technical screening; not all companies have this step, but many will have you talk to your team or to technical screeners before having a final interview with the manager.

Why This Matters

As you can see, there are roughly two “audience groups” for your resume.  There’s the recruiter/HR who probably don’t know the intimate details of your job and are mostly working off checklists, and who are going through probably hundreds of resumes.  They really want to see a nice, concise set of skills.

Then there is the hiring manager and/or the team you will be working with; this audience is more interested in your work experience and if you can prove that you can do the job you are applying for — at least on paper.

The perfect resume caters to both groups and gives everyone what they need to see in the most efficient way possible.

Your Experience

So that’s it for this first article of the series.  I will be continuing on about how to actually write the resume in the next article.  I’d love to hear your comments; are you someone that has done hiring?  What do you look for?  What has worked for you when it comes to resumes?  Until next time!

How to Use Android SDK Emulator from the Command Line

Sometimes, you just want to run an Android Emulator and you don’t want to have to deal with an IDE or a bunch of GUI stuff. Sometimes, you just want it simple and command line. Epic Force is getting into the mobile development space, and I was going to help test the Android build of a cool application we’re building. Finding documentation of how to do this 100% command line is tricky, and there is lots of old information to sift through as the process has recently changed.

Here’s how to do it, up-to-date as of May 1st, 2019. Of course, your mileage may vary if you stumble across this in the future.

And, if you need help with your Android or iOS projects, let us know. Epic Force has been developing applications for over 20 years and can either provide you with staffing, or help you do your project at any stage!

Download Stuff

First, you need the Android SDK.  This is a zip you get from Google; get the CLI version. Go here: https://developer.android.com/studio/index.html

Then scroll past the Android Studio stuff and pick the appropriate item from the “Command Line Tools Only” list.  At the time of this writing, the download for Linux is called “sdk-tools-linux-433796.zip” however that number is sure to change frequently.  These instructions should also work (more or less) for Mac and Windows, though I only tested them in Linux.

Make a directory somewhere called “android”, put the zip in there, and unzip it.  Note that the zip file does NOT have a “root directory” in it, so if you just unzip it, it will dump a bunch of stuff in the current directory.  I hate when zip files do that! Woe be unto you if you already have directories the same name as what is in the zip.

Install Stuff

Go into the ‘tools’ directory of the unzipped folder.  We are going to use sdkmanager to download the system images, platform tools, and platform that we need.  If you made an ‘android’ directory in your home directory and unzipped the SDK into that directory like I did, you’d do this:

cd ~/android/tools
./bin/sdkmanager --install 'system-images;android-28;google_apis;x86_64'
./bin/sdkmanager --install 'platforms;android-28'
./bin/sdkmanager --install platform-tools

The “android-28” portion is key; that is the version of android that will be available to you.  You can list all the options with:

./bin/sdkmanager --list

You will need a system-images package and a matching platforms package for each version of Android you want to run in the emulator.  It is up to the reader to figure out the mapping of Android image numbers to "friendly" Android operating system versions; in my use case, the latest one was fine.
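The --list output is long; a grep narrows it down to just the installable platforms (the package names shown will be whatever your SDK release offers):

```shell
cd ~/android/tools
# Filter the package list to platform entries only.
./bin/sdkmanager --list | grep 'platforms;android-'
```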

Create AVD

The AVD (Android Virtual Device) is the image that will be used by the emulator.  You can have as many of these as you need. To create an AVD, while still in the ‘tools’ directory as in the above steps, type:

./bin/avdmanager create avd -n ImageName -k "system-images;android-28;google_apis;x86_64"

The -k portion should match whatever image you downloaded.  It will ask if you want to create a custom hardware profile; the answer is likely “no”.

avdmanager can also delete and edit your images and has reasonably instructive help.  Your images (on Linux, and probably Mac) are stored in ~/.android/avd
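For instance, listing your AVDs and deleting one looks like this (ImageName being whatever you called yours):

```shell
# Show all AVDs you have created, then remove one by name.
./bin/avdmanager list avd
./bin/avdmanager delete avd -n ImageName
```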

Run Emulator

So the previous steps you should only have to do once (or every time you want to set up a new virtual device).  This is what you do when you want to run the emulator.  While still in the “tools” directory, because the emulator is incredibly sensitive to what directory you are in, type:

./emulator @ImageName

Where ImageName was the AVD image you created in the previous step.  It should just work! Please note that the emulator will save your Android’s state however you have it when you close the emulator; personally I find this unhelpful, so I go into the options (See the … on the side bar), pick Snapshots on the left, then the Settings tab, and I turn off Auto-save current state to Quickboot.

If you don’t do this, awkward stuff can happen.  For instance, I turned off the emulator with the little ‘On/Off’ button … and it wouldn’t come back on again no matter what I did.  Then I closed the emulator, and it saved my emulated android in the off state. Nothing I did would turn it on, so I had to re-make my image.  I’m sure there’s a way around this, but good luck finding an answer for it.
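If you prefer to skip the GUI settings entirely, the emulator also accepts a flag that disables state saving for a single run; to the best of my knowledge it is -no-snapshot-save (check ./emulator -help if it errors):

```shell
# Boot the AVD without persisting its state on exit.
./emulator @ImageName -no-snapshot-save
```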

But, at least now, you have an answer as to how to run the emulator!

Sharing Filesystem between Linux and Windows on the AWS Cloud

Overview

This technology article is about an enjoyable and challenging problem I solved for a Broadridge client: sharing a filesystem between Linux and Windows on Amazon AWS.

Problem Statement

A Broadridge client had a unique requirement for the dmEdge project: an application running on an AWS Cloud Linux instance needed to share a filesystem with an application running on an AWS Cloud Windows instance.

R&D/Proof of Concept (POC)

Amazon EFS is not supported on Windows instances, so it cannot meet the problem statement on its own.

To resolve the problem and to find out the best possible solution, I have carried out the following proof of concepts:

Sharing the filesystem with AWS EFS + Samba Server (AWS Cloud Linux) + Windows

Sharing the filesystem with ObjectiveFS + Samba (AWS Cloud Linux) + Windows

Sharing the filesystem with Samba (AWS Cloud Linux) + Windows

among other combinations.

Conclusion

Upon thorough study, I found that for web server workloads ObjectiveFS suits best: it has higher performance and lower latency than Amazon EFS. The key differences between ObjectiveFS and AWS EFS are listed below:

                         ObjectiveFS                                EFS
Reliability              Backed by Amazon S3                        NFS-based (NFSv4) protocol
Storage durability       99.999999999% (via S3)                     Not specified
Performance              Always high performance                    Pay for performance
Small files              80x faster than EFS                        Slow for small files
Large files              350 MB/s                                   100 MB/s
Scalability              1 to 1000s of instances                    1 to 1000s of instances
Storage cost             S3: $0.03/GB                               EFS: $0.30/GB in US East (N. Virginia)
Security                 End-to-end encryption                      Data at rest encrypted; in-transit encryption in preview
Availability             Supports all regions                       Currently only in a few regions (Northern Virginia, Ohio, Oregon, Northern California)
Accessibility            Access from anywhere                       Limited to the same region as the EC2 instances using it
Product maturity         In production since 2013                   Released in July 2016
OS support               Linux, OS X, Windows via Samba/NFS         Linux only; not supported on Windows EC2 instances
Backup and DR            Secure storage such as S3 and on-premise   Can use S3 Cross-Region Replication or a
                         S3-compatible object stores; built-in      custom-coded EFS-to-EFS backup solution
                         snapshots for point-in-time recovery

Features                 ObjectiveFS                                EFS
Snapshots                Automatic & checkpoint                     -
Data integrity           Strong checksums                           -
Cross-region access      Yes                                        -
Local disk cache         Yes                                        -
Compression              Yes                                        -
Transfer Acceleration    Yes                                        -
AWS IAM support          Yes                                        Yes
User/Group ID mapping    Yes                                        Yes
AWS KMS support          Yes                                        Yes
Client-side encryption   Yes                                        -
Server-side encryption   Yes                                        Yes

The following sections cover the steps to share your ObjectiveFS filesystem from Linux to Windows via Samba.

Download/Install ObjectiveFS and Export ObjectiveFS to Windows via Samba

Launch EC2 Linux instance by following the Broadridge guidelines.

Note: Choose Broadridge approved hardened Image

  1. Connect to your Amazon EC2 instance.
  2. Elevate your user privileges.

sudo su

  3. Update the EC2 instance.

yum update -y

  4. After you've connected, install ObjectiveFS with the following commands.

Note: ObjectiveFS is a licensed product; you need an account and a license.

$ curl -O https://objectivefs.com/user/download/acpbuxv5r/objectivefs-5.4-1.x86_64.rpm

$ yum install objectivefs-5.4-1.x86_64.rpm

  5. Verify NTP has a small offset (<1 sec):

$ /usr/sbin/ntpdate -q pool.ntp.org

  6. Configure your credentials. If using keys, get your S3 keys.

Note: Create the user name 'logger' when getting the S3 keys.

$ sudo mount.objectivefs config

Enter ObjectiveFS license: <your objectivefs license>

Enter Access Key Id: <your AWS or GCS access key>

Enter Secret Access Key: <your AWS or GCS secret key>

Enter Default Region (optional): <S3 or GCS region>

If using IAM roles

$ sudo mount.objectivefs config -i

Enter ObjectiveFS license: <your objectivefs license>

Enter Metadata Host [169.254.169.254]: <your metadata host ip>

Enter Default Region (optional): <S3 or GCS region>

  7. Create a file system:

For your filesystem name, use a globally unique, non-secret name (i.e. a new bucket not used by others) and ObjectiveFS will create a new bucket with that name for your filesystem.

Choose a strong passphrase, write it down and store it somewhere safe.

IMPORTANT: Without the passphrase, there is no way to recover any files

Default region: the default region entered during credential configuration (if not specified, us-west-2 for AWS)

$ sudo mount.objectivefs create <your filesystem name>

Passphrase: <your passphrase>

Verify passphrase: <your passphrase>

To specify your filesystem region:

$ sudo mount.objectivefs create -l <your region> <your filesystem name>

Passphrase: <your passphrase>

Verify passphrase: <your passphrase>

  8. Mount the file system

You need an existing empty directory to mount your file system on, e.g. /ofs. The mount process will run in the background.

$ sudo mkdir /ofs

$ sudo mount.objectivefs <your filesystem name> /ofs

Passphrase: <your passphrase>

Alternatively, you can mount the filesystem with S3 Transfer Acceleration enabled for faster file transfers.

$ sudo mkdir /ofs

$ sudo AWS_TRANSFER_ACCELERATION=1 mount.objectivefs <your filesystem name> /ofs

Passphrase: <your passphrase>

  9. Install Samba

On CentOS:

$ sudo yum install samba

  10. Open the Samba config in an editor (for example nano /etc/samba/smb.conf) and paste the following at the end.

[ofs]

path = /ofs

valid users = logger

read only = no

guest ok = yes

writable = yes

browseable = yes

vfs objects = acl_xattr

acl_xattr:ignore system acls = yes

nt acl support = yes

create mask = 0700

directory mask = 0700

force user = logger

Note: the 'force user' parameter makes Samba perform file operations as the 'logger' user, so the Windows-side 'logger' login can write data into the share.

Alternatively, if you want to reach the share from any machine on your network, paste the following at the end instead. Since this share is anonymous, users won't have to log in to access the files and folders within.

[Anonymous]

path = /ofs

browsable = yes

writable = yes

read only = no

Or, set a root userid/password on the Windows EC2 instance and configure it in Samba to connect.

Or, create a local user account on the Windows EC2 instance with the Samba account's userid/password to connect to the Samba share.

  11. Save the file and start Samba with service smb start. To make sure you have the configuration file set right, testparm can validate it:

$ testparm

  12. Exit out and create a user, 'logger' in this case. Note: this userid/password is needed for the Windows login that connects to the Samba share.

useradd logger

passwd logger

  13. Set the same password in Samba:

smbpasswd -a logger
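Before moving to Windows, you can sanity-check the share locally with smbclient (install it via yum if it is missing; 'logger' is the user created above, and you will be prompted for the password):

```shell
# List the share's contents as the logger user to confirm Samba is serving it.
smbclient //localhost/ofs -U logger -c 'ls'
```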

Mapping ObjectiveFS (S3) to Windows

SMB is ready; let's move to the Windows environment and map this share to a drive letter via "Add a network location".

Make sure you have set up your security group accordingly: ports 445 and 139 (SMB) should be open between the source and target. In the EC2 console (EC2 -> Security Group), add rules for these ports scoped to your IP, CIDR, or another Security Group. Then check from other instances to confirm the share is reachable.

Test Samba Share (This is to simulate InDesign on Windows Instance)

Launch EC2 Microsoft Windows Server instance by following the Broadridge guidelines.

Note: Choose Broadridge approved hardened Image

To connect to your Amazon EC2 instance and test the samba share:

  1. Connect to your Amazon EC2 instance.
  2. After you've connected, open Start -> Run and enter the IP address of the Samba server prefixed with two backslashes (for example \\10.0.0.5).

Note: When prompted for a user id/password, use the logger/logger credentials created in the earlier step.

Mount a shared folder in Linux (This is to simulate Customization Engine on Linux instance)

Launch EC2 Linux instance by following the Broadridge guidelines.

Note: Choose Broadridge approved hardened Image

  1. Connect to your Linux instance as ec2-user using SSH.
  2. Elevate your user privileges.

sudo su

  3. Update the EC2 instance.

yum update -y

  4. Create a local folder.

$ mkdir test_dir

  5. Mount the share using the following command:

$ mount -t cifs //Windows_IP/share_name target_folder_path -o username=user,password=pwd
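If you'd rather keep the password off the command line (and out of shell history), cifs supports a credentials file; the path and values below are examples:

```shell
# /root/.smbcred (example path) contains two lines:
#   username=logger
#   password=logger
chmod 600 /root/.smbcred
mount -t cifs //Windows_IP/share_name target_folder_path -o credentials=/root/.smbcred
```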

To map a network drive locally

  1. Open a command prompt on the windows machine and run the following command:

net use devicename: \\computername\sharename /USER:domainname\userid password /PERSISTENT:NO

net use — Run net use alone to show detailed information about currently mapped drives and devices.
devicename — The drive letter or printer port you want to map the network resource to. For a shared folder on the network, specify a drive letter from D: through Z:; for a shared printer, LPT1: through LPT3:. Use * instead of a device name to automatically assign the next available drive letter, starting with Z: and moving backward.
\\computername\sharename — The name of the computer (computername) and the shared resource (sharename), such as a shared folder or a shared printer connected to computername. If there are spaces anywhere here, put the entire path, slashes included, in quotes.
username — Use this with /USER to specify the username to connect to the shared resource with.
password — The password needed to access the shared resource on computername. You can enter * instead of the actual password to be prompted for it during execution.
domainname — Specify a different domain than the one you're on, assuming you're on one. Skip it if you're not on a domain or want net use to use the one you're already on.

Recommendations

  1. Enable the disk cache when a local SSD or hard drive is available. For EC2 instances, we recommend using the local SSD instance store instead of EBS, because EBS volumes may hit IOPS limits depending on the volume size.
  2. Use the i3.xlarge EC2 instance type for production and a general-purpose EC2 instance type for non-production environments.
  3. To ensure failover capability, consider assigning a secondary private IP address to the primary ENI; in the event of an instance failure, you can move that secondary private IPv4 address to a standby instance.

Setup file system Passphrase from an AWS Parameter Store

  1. Make sure AWS Systems Manager (SSM) is allowed in the IAM role attached to your EC2 instance to access AWS parameter store.
  2. Verify AWS CLI is a newer version with support for SSM get-parameter.
  3. Add your passphrase as a secure string to AWS parameter store.

# aws --region <your s3 region> ssm put-parameter --name 'OBJECTIVEFS_PASSPHRASE' --value '<your passphrase>' --type SecureString

  4. Create an executable file (e.g. /usr/sbin/get_aws_ssm) with the following content. This script calls the AWS parameter store and prints only your passphrase, which is returned to ObjectiveFS.

#!/bin/sh

aws --region <your s3 region> ssm get-parameter --name 'OBJECTIVEFS_PASSPHRASE' --with-decryption | sed -n '/Value/s/.*: "\(.*\)",/\1/p'
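If your AWS CLI is recent enough, a --query expression is a sturdier alternative to parsing the JSON with sed (same parameter name; swap in your region):

```shell
#!/bin/sh
# Print only the decrypted passphrase value from the parameter store.
aws --region <your s3 region> ssm get-parameter --name 'OBJECTIVEFS_PASSPHRASE' \
    --with-decryption --query 'Parameter.Value' --output text
```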

  5. In /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE, specify the path to the executable file as the file content.

            #!/usr/sbin/get_aws_ssm

Open Source Anthill Pro to Jenkins Migration Plugin Tool

The Anthill Pro to Jenkins Migration Tool uses the Anthill Pro Remoting API to process Anthill Originating Workflows and convert them into Jenkins Pipeline jobs. The produced Jenkins jobs have parameters based on the Anthill Pro properties used by the jobs, and are neatly formatted with comments explaining all the decisions made by the plugin during the migration process.

The migration can run in batches, allowing you to queue up a set of Anthill Pro jobs you want to migrate and then perform the migration in parallel with a thread pool. This is handy in particular when your Anthill Pro server and destination Jenkins server are in separate datacenters, when testing over the wire, or when migrating hundreds or thousands of workflows.

Key features:

  • Simple interface; pick your Anthill Pro project and workflows, and go! There is not much user input required to do a migration.
  • Very clean, commented, easy-to-read generated pipeline scripts that map one-to-one with your Anthill Pro workflow job configurations.
  • Migration of properties, including secure properties, from Anthill Pro to Jenkins parameters.
  • The plugin enables Jenkins Pipeline scripts to process inline beanshell scripts, which are commonly used in AHP workflows (properties of the format ${bsh:...}).
  • Also, the plugin enables Jenkins Pipeline scripts to have parameters that refer to other parameters. This is necessary because Anthill Pro properties often reference each other, which is not supported by Pipeline out of the box.
  • Use of the Jenkins credential store for migrated credentials (such as source repository credentials).
  • Very easy to test produced pipeline jobs, with clear error messages if there’s a problem with the migration.

Key planned features of the migration plugin include:

  • Support for more Anthill steps (there’s a pretty limited set implemented at the moment)
  • Support for preconditions and other Anthill Pro scripting mechanisms that are difficult to deal with by hand.
  • The ability to just export properties without creating Jenkins jobs in case you would rather use your own Pipeline template. Use our generated Pipeline scripts as a base to make your own template and we’ll make sure you didn’t miss anything.

BUILDING

In order to build this project, you must first have the net.epicforce.migrate.ahp library in your local packages. At the moment, this is done by running 'mvn clean install' in the library's source tree to make a local cache of the AHP library. It will automatically bake in all the IBM UrbanCode pieces needed to run the migration.

You can get the library here: https://github.com/epicforce/AnthillProMigratorLibrary
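Assuming a standard checkout, building the library locally looks like this (the directory name follows the repo's default clone name):

```shell
# Clone the AHP library and install it into the local Maven repository.
git clone https://github.com/epicforce/AnthillProMigratorLibrary
cd AnthillProMigratorLibrary
mvn clean install
```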

PLEASE NOTE: Different versions of AHP have different remoting APIs. They are NOT compatible with each other. Therefore, you must build this plugin for whatever version of AHP you anticipate interfacing with. You may get errors about serial numbers not matching if you try to use mismatched remoting APIs.

To build your plugin (.hpi) file, you can do a typical:

mvn clean package

Source code available here:

https://github.com/epicforce/AnthillProToJenkinsMigrator