

In Winter 12, Salesforce very quietly, and without any fanfare, introduced a capability to Apex that has been something of a holy grail to insane Force.com geeks like myself: the beginnings of support for Reflection in Apex. Which intrepid explorer blazed the trail that led to this prodigious capability? For those of you who are keeping score, our silent hero’s name is not Indiana, but Jason. Or, more precisely, JSON.

This is a bit of a historical post, meant to pave the way for a more radical post coming later this week (stay tuned!). It describes events / features that have occurred / been introduced over the course of the past year, but whose full significance is only now being discovered by many Force.com developers. Understanding these developments will be a crucial preliminary to reading my upcoming post on an extension of Tony Scott’s trigger pattern to allow for scalable, extensible triggers usable by managed package developers / ISVs (there, I let the cat out of the bag!).

Ancient History: Native JSON Support lands in Apex

In Winter 12, Salesforce added native support for JSON to Apex, meaning that you can, in just two lines of code, convert (nearly) any Apex class/object into a JSON-formatted String, and then convert it back from a String into the proper Apex class/object. For instance:


Map<String,String> params = new Map<String,String>{
    'foo' => 'bar',
    'hello' => 'world'
};
// Serialize into a JSON string
String s = JSON.serialize(params);
// --> yields '{"foo":"bar","hello":"world"}'

// Deserialize back into an Apex object

// Define the Type of Apex object that we want to deserialize into
System.Type t = Map<String,String>.class;

params = (Map<String,String>) JSON.deserialize(s,t);

The addition of support for JSON saved ISVs the immense hassle (and extremely-limiting obstacle) of implementing their own JSON serialization classes, which could very quickly consume all of the 200,000 Script Statements Governor Limit. So needless to say, this alone made native JSON support a godsend to Apex developers.

The Tip of the Iceberg: System.Type and Apex Interfaces

But notice that little System.Type class that we instantiated prior to deserialization. As the Apex development team was building in JSON support, they laid the foundation for something even bigger – an Apex way to achieve what’s known as “Reflection”, or dynamic instantiation/execution of classes/methods based on name. For developers looking for a parallel in Force.com, this is similar to creating a new Sobject by getting a Schema.SObjectType token from the Global Describe Map using an arbitrary String (e.g. ‘Account’) as the map search key, and then calling the newSObject() method. In this scenario, you could have received the dynamic key (‘Account’) from anywhere — from a text field in a custom object in your database, from a JavaScript prompt in the client, etc., and run totally dynamic logic (the instantiation of an arbitrary SObject type) based on that String value.
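Here’s a quick sketch of that Schema-based pattern (the 'Account' literal stands in for any dynamically-supplied String):


// Dynamically instantiate an SObject from a String type name
String typeName = 'Account'; // could come from a record, a JS prompt, etc.
Schema.SObjectType token = Schema.getGlobalDescribe().get(typeName);
if (token != null) {
    SObject record = token.newSObject();
    record.put('Name', 'Acme'); // fields can be set dynamically, too
    insert record;
}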

How does this Schema example connect to Trigger Patterns? Well, the reason that the dynamic SObject instantiation from Schema data works is that each Custom and Standard Object in your database is an instance of an interface called “SObject”. An interface is a definition of methods that all implementations of that interface (such as Account, Contact, MyCustomObject__c) must support. This is why we can cast any Account, Contact, or custom object record into an SObject and then get access to some of the dynamic methods that the SObject class supports, such as get(String fieldName) or getSObject(String fieldName) — because each Account, Contact, or MyCustomObject__c record is not just an instance of an Account, Contact, or MyCustomObject__c — it’s also an implementation of the SObject interface, which defines the methods get(), getSObject(), etc.
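A minimal illustration:


// Treat a concrete record generically, via the SObject type
SObject rec = new Account(Name = 'Acme');
String name = (String) rec.get('Name'); // dynamic field read
rec.put('Name', 'Acme Corp');           // dynamic field write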

With native JSON in Apex, you can now take this Reflection business a huge step further, thanks again to the magic of interfaces. Before we tackle custom interfaces, let’s consider an important standard Apex interface: Schedulable. The Schedulable interface looks like this:


global interface Schedulable {
    global void execute(SchedulableContext ctx);
}

The power of Apex classes that implement Schedulable is that they can be scheduled for execution at a later date/time using the System.schedule() method, like so:


// Get an instance of MyScheduledClass,
// which implements Schedulable
Schedulable c = (Schedulable) new MyScheduledClass();

// Schedule MyScheduledClass to be run later
System.schedule('CleanUpBadLeads','0 0 8 10 2 ?',c);

Now, for those of you who were excited about Skoodat Relax, you may have just realized how it all works, and you are absolutely right! One of the features of Relax is its ability to store the name of a Scheduled Apex Class, and the CRON schedule that dictates when it should be run, in a custom Object called Job__c in Salesforce, and then dynamically activate, deactivate, and run the desired classes at the desired times. And the not-so-secret secret (Relax is on GitHub) behind it is: it all works because of interfaces… and because of native JSON support.

Taking this a step further, as Relax does, we can define complex custom interfaces, with whatever methods we’d like, and then, when our Apex code is run, we can call these methods on whatever implementation of our interface we are handed:


// Our interface definition
public interface Runnable {
   void run();
}
// A sample implementation
public class DestroyBadApples implements Runnable {
   public void run() {
       // Delete Apples created more than a year ago
       delete [select Id from Apple__c where CreatedDate <= :Datetime.now().addYears(-1)];
   }
}

// A sample use of this interface
public class RunStuff {
   public RunStuff() {
      // Define the stuff we'd like to run
      List<Runnable> stuffToRun = new List<Runnable>();
      stuffToRun.add(new DestroyBadApples());

      // Run the stuff
      for (Runnable r : stuffToRun) r.run();
   }
}

Part 3, in which we catch the scent of our prey: Dynamic Class Instantiation with JSON

With Apex support for interfaces, developers already had, prior to Winter 12, one of the two key tools they needed that would make possible the Apex Holy Grail of dynamic method calls / code execution. The missing tool was the ability to create an implementation of an interface without knowing the class’ name beforehand. In our previous examples, notice how we had to hardcode the name of the implementations, e.g. MyScheduledClass and DestroyBadApples.

With Winter 12, our humble hero JSON gave us that missing tool. How so? Well, the JSON.deserialize(string,type) method allows us to deserialize any String into any supported Apex Type, as defined by Apex Classes which have a corresponding representation as a System.Type, a little-noted System class that crept in with Winter 12. The key tricks that turned System.Type into a Holy Grail as early as Winter 12 were:

  1. A little trick involving an empty JSON String, e.g. “{}”
  2. System.Type.forName(name)

Combining these two tricks with interfaces, and voila! We have Dynamic Class Instantiation in Apex:


// A sample use of this interface
public class JobRunner {
   public static void ScheduleJobs() {
      // Retrieve the jobs we'd like to schedule from a custom object
      for (Job__c j : [select JobNumber__c, Cron__c, ApexClass__c from Job__c]) {
          try {
              // Get the Type 'token' for the Class
              System.Type t = System.Type.forName(j.ApexClass__c);
              if (t != null) {
                 // Cast this class into the Schedulable interface
                 // (if the cast fails, we'll get an error)
                 Schedulable s = (Schedulable) JSON.deserialize('{}',t);
                 // Schedule our Schedulable using the stored CRON schedule
                 System.schedule('JobToRun'+j.JobNumber__c,j.Cron__c,s);
              }
          } catch (Exception ex) {
              // Swallow the error so that one bad Job record
              // does not prevent the others from being scheduled
          }
      }
   }
}

Part the 4th, in which our hero’s achievements are acknowledged but his powers not increased

In Summer 12, Salesforce “officially” acknowledged its intentions with System.Type and added a newInstance() instance method allowing for less hackish dynamic class instantiation by name. However, the JSON method remains far more powerful, as it allows us to dynamically populate all manner of fields/properties on our dynamically-instantiated objects — using industry-standard JSON syntax, allowing this dynamic population to be initiated client-side and completed server-side in Apex! If your mind is not spinning with all of the possibilities right now, I’m sure it either did so already during the past 12 months, or will start doing so very soon, particularly when we talk about applying this capability to Triggers (more propaganda for the next post!)
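To illustrate, here’s a sketch reusing the Runnable example from above (the maxRecords property is hypothetical):


// The 'official' Summer 12 route: dynamic instantiation by name
System.Type t = System.Type.forName('DestroyBadApples');
Runnable r = (Runnable) t.newInstance();
r.run();

// The JSON route can additionally populate properties during instantiation,
// assuming the target class declares a maxRecords property (hypothetical)
Runnable r2 = (Runnable) JSON.deserialize('{"maxRecords":200}', t);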

So where can Salesforce go from here? Well, there is still no direct way to dynamically execute a particular method on an Apex class without hard-coding the method’s name. Support for this would take Reflection in Apex to a whole new level. For now, though, well-crafted Interfaces can usually get around this limitation. The real meat of the work was done early last year when Salesforce R&D was “snowed over” in development for Winter 12, and for the fruit of those 4 months, the Force.com Development community should be immensely thankful.


Of all the killer Summer 12 improvements to the Force.com Platform, the one we at Skoodat were most excited about was Post-Install and Uninstall Apex scripts. In a nutshell, they allow developers of Force.com Managed Packages (more posts coming on what these are and how to use them!) to execute arbitrary Apex Code every time that a User installs or uninstalls a version of the developer’s package. This feature is directly targeted at developers’ needs to perform various “setup” and “tear-down” tasks to help prepare customers’ orgs to use their new packages.

What sorts of tasks fall under this category? Here are a few of the most common:

Setup / On-Install Tasks

  • Create/update records of “configuration” objects
  • Create/update records of List Custom Settings or Organization-Default Hierarchy Custom Settings
  • Create sample data (for Trials)
  • Send “Welcome” Emails to the User who installed the package
  • Notify an external system (using a callout or email service)
  • Kick off a batch process (for initializing fields, modifying fields to fit new APIs, etc.)

Teardown / On-Uninstall Tasks

  • Delete sample data
  • Send “Thanks for using our Package” Emails to the User who uninstalls the package
  • Notify an external system (using a callout or email service)

Prior to this feature, developers would use various “hack” strategies for achieving this sort of behavior, including:

  • Setup Page
    • Strategy: The default tab in your app is a “setup” page that runs needed routines either on page load or asynchronously in response to user button clicks
    • Problems:
      • Users may not go to this page
      • DML is not permitted in page constructors, forcing asynchronous post-load behavior
      • Button click behavior often requires lots of custom JavaScript Remoting calls or VF rerenders
      • Wastes users’ time getting to the real meat of your app
  • Runtime Detection
    • Strategy: When custom triggers / VF pages in your app are run, your app looks to see if needed setup tasks have been performed, and runs setup routines as appropriate
    • Problems:
      • Makes your app run slower — this logic has to be run every time your trigger / page is run
      • DML is not permitted in VF page constructors, so any insertions of Custom Setting or custom configuration object records will have to occur post-load / asynchronously, which can really throw off page functionality

With the problems of the hack approaches in mind, Apex Install Scripts appeared to be a total godsend! And, largely, they are indeed.

So how does one use this tool? Well, the first step is to have a Managed Package. Once you have a Managed Package, you need to create a new class that implements the InstallHandler interface. The docs have a basic example of how to do this; a minimal skeleton (names here are illustrative) looks something like this:
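
global class MyPostInstallScript implements InstallHandler {
    global void onInstall(InstallContext ctx) {
        // Runs in the subscriber org after the package is installed;
        // previousVersion() is null on a first-time install
        if (ctx.previousVersion() == null) {
            // First-install setup tasks go here
        }
    }
}

Then, edit your package: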

You will then see two fields, one for specifying a Post Install Script, and another for specifying a script to run when your package is Uninstalled by a User. In the lookup, you will be prompted to select a class which implements InstallHandler. Be sure your class has COMPILED before trying this — otherwise it may not show up in the list of available classes.

Now, release a Managed – Beta version of your package (so that your InstallHandler implementation class does not get locked down into your package until you are ready). Once you have the install link, install your package into a Sandbox or Partner DE or Test Org (the only places where Managed – Beta packages can be installed). As soon as you are finished with the 2-3 step install instructions, your Post Install Script will be run. To run your Uninstall Script, uninstall this package.

Pretty cool stuff!

The Phantom of the Install Opera

As we at Skoodat dived into using Install Scripts, we encountered some mysterious, strange, sometimes phantom behaviors, which aroused some pretty basic questions in our minds as to the nature and functionality of Install Scripts — none of which are discussed in the Install Handler interface documentation:

  • Who do Install Scripts execute as?
    • What User?
    • What Profile?
    • Based on Source (DE) Org, or Destination Org?
  • What code/resources in the Destination Org do Install Scripts have access to?
    • Global methods in Apex Classes in Managed Apps that the Install Script’s app extends?
  • Which Namespaces / Managed Apps do Install Scripts have access to?
    • All apps? Just the installer’s app? Or any Site Licensed apps?

In the absence of documentation, the only way to find out was to either ask questions of the SFDC community, or find out through experimentation. I did both, and here are some things I discovered:

  • Install Scripts execute as a totally unique User and Profile
    • Neither the User nor Profile exist in either Source or Destination orgs.
    • This User / Profile has essentially ‘God’ / System privileges
    • This User / Profile can create / modify records of Standard objects as well as Custom Objects / Custom Settings that come with the package being installed.
  • Do NOT annotate your Install Script, or any classes it calls (such as Batch/Scheduled Apex) as “with sharing”
    • YES: public class MyInstallScript implements InstallHandler
    • YES: public without sharing class MyInstallScript implements InstallHandler
    • NO: public with sharing class MyInstallScript implements InstallHandler
  • DML operations can fail if they are not initiated from the class which implements the InstallHandler interface
    • A DML operation executed from a helper class will fail due to SObjectExceptions such as “Field Name is not accessible”
    • *Simply extracting this code back into the class which implements InstallHandler can avoid this issue*
  • Describe / Schema / Permissions info is WRONG / MISLEADING from the InstallScript context
    • Schema permissions (e.g. Object or Field Accessibility) always returns FALSE
      • Example: Calls such as Contact.Name.getDescribe().isAccessible() will universally return FALSE regardless of Profile permissions in either Source or Destination org
    • Profile / PermissionSet data is WRONG
      • Example: PermissionsModifyAllData returns FALSE on the Install Script User’s Profile / PermissionSet, but the InstallScript CAN modify all data!
    • Consequences: Do NOT try to dynamically determine whether to permit DML based on permissions from within Install Scripts. You will get very frustrated.
    • *Timeline for Resolution*: Summer 13 release (Safe Harbor — got this from SFDC support)
      • UPDATE 2/25/2013 — looks like this was NOT resolved in Spring 13, so I’ve changed to Summer 13 😦

If you’re wondering how I verified all this, here’s the Post Install Script I used. Basically it spits out some Describe information on a couple of standard objects (Account and Contact) as well as a custom object in the source developer org (relax__Job__c). I also spit out the username and profileId of the running user, which is how I discovered that the running user is in fact a “phantom” with all the powers of System. Here’s the complete InstallScript. After assembling a debug string, it calls a helper method that emails me the content.


public class InstallScript implements InstallHandler {

	public void onInstall(InstallContext ctx) {

		String username = UserInfo.getUserName();
		String profileId = UserInfo.getProfileId();
		String debugString =
			'Username: ' + ((username != null) ? username : 'null')
			+ ', ProfileId: ' + ((profileId != null) ? profileId : 'null')
			+ ', Contact.Accessible: ' + Contact.SObjectType.getDescribe().isAccessible()
			+ ', Contact.LastName.Accessible: ' + Contact.LastName.getDescribe().isAccessible()
			+ ', Account.Accessible: ' + Account.SObjectType.getDescribe().isAccessible()
			+ ', Account.Name.Accessible: ' + Account.Name.getDescribe().isAccessible()
			+ ', relax__Job__c.Accessible: ' + relax__Job__c.SObjectType.getDescribe().isAccessible()
			+ ', relax__Job__c.relax__Apex_Class__c.Accessible: ' + relax__Job__c.relax__Apex_Class__c.getDescribe().isAccessible();

		JobScheduler.SendDebugEmail(
			debugString,debugString,'Debug from Relax Install Script in org ' + ctx.organizationId(),'myname@mycompany.com'
		);
	}

	/////////////////
	// UNIT TESTS
	/////////////////

	private static testMethod void TestInstall() {
		InstallScript is = new InstallScript();
    	Test.testInstall(is, null);
    	Boolean b = false;
    	System.assertEquals(false,b);
	}
}

And… here is the debug output:

Username: 033e0000000h2ydias@00dd0000000bt2keaq
ProfileId: 00ed0000000SUxMAAW
Contact.Accessible: false
Contact.LastName.Accessible: false
Account.Accessible: false
Account.Name.Accessible: false
relax__Job__c.Accessible: false
relax__Job__c.relax__Apex_Class__c.Accessible: false

As you can see, Schema methods universally return false, and a quick trip to SOQLExplorer will reveal that neither the User nor the Profile executing this Script exists in either Source or Destination org.

Making the Switch

So, bugs and weirdness aside, Install Scripts can still do some rocking things. At Skoodat, we’ve used them to populate certain configuration objects from JSON/XML stored in Static Resources included with our package, as well as to insert default custom settings. Here’s an InstallScript that would tackle the second of these:


public class InstallScript implements InstallHandler {

   public void onInstall(InstallContext ctx) {

      // Create an Org-Default record
      // of a Hierarchy Custom Setting
      // included in our package
      insert new MySetting__c(
         // Makes an Org Default setting
         SetupOwnerId = UserInfo.getOrganizationId(),
         SettingField__c = 'JediWarrior'
      );

   }

   /////////////////
   // UNIT TESTS
   /////////////////

   private static testMethod void TestInstall() {

      // Delete any Org-Default Custom Settings
      delete [select Id from MySetting__c
         where SetupOwnerId = :UserInfo.getOrganizationId()];

      // Run our post-install script,
      // simulating a no-prior version installed scenario
      InstallScript is = new InstallScript();
      Test.testInstall(is, null);

      // Verify that an Org-Default setting was created
      MySetting__c c = MySetting__c.getOrgDefaults();

      System.assertNotEquals(null,c);
      System.assertEquals('JediWarrior',c.SettingField__c);

   }
}

So much easier than making a custom setup page, and it happens instantly!
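As for the first task, here is a rough sketch (not our exact code) of seeding configuration records from JSON stored in a packaged Static Resource; the resource and object names are made up:


// Load JSON config data packaged as a Static Resource, and insert records
StaticResource sr = [select Body from StaticResource
   where Name = 'DefaultConfigData' limit 1];
List<Config__c> configs = (List<Config__c>)
   JSON.deserialize(sr.Body.toString(), List<Config__c>.class);
insert configs;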

We’re definitely looking forward to more improvements to this feature in Winter 13 and beyond, especially when we can have public implementations of global interfaces. Moreover, Winter 13 will include some sweet new methods for working with Static Resource data in Tests, so I may just be convinced to post an example of creating custom config object records in an InstallHandler from data stored in Static Resources as JSON/XML — so keep posted!

UPDATE 6/25/2014

I recently stumbled upon a post by Matt Bingham on Salesforce StackExchange where he documents that there is a huge difference between leaving your Install Script’s sharing mode unspecified, e.g.

public class MyInstallScript implements InstallHandler {

}

and explicitly marking it without sharing:

public without sharing class MyInstallScript implements InstallHandler {

}

In fact, marking your class as without sharing grants it permission to:

  • view all data
  • modify all data
  • interrogate system data (like CronTrigger and ApexClass)

I have also seen that there are some objects, such as the Chatter Group object (CollaborationGroup) that you can only interact with fully from Install Scripts in this “super-user” mode.

Regarding inaccurate PermissionSet / Profile data, I have also run into another frustrating wrinkle: the Install Script User does have a Profile, but this Profile is not queryable, so any logic in your app that queries on the running user’s Profile or PermissionSetAssignments will NOT work, because (a) the User’s Profile record does not exist (you get the error “SObject has no rows for assignment”) and (b) the PermissionSetAssignments relationship on User is not visible from the InstallScript context (you get the error “Didn’t understand relationship PermissionSetAssignments”). Therefore, you’ll have to write special logic to handle the scenario where the running User’s Profile record does not exist, to avoid query exceptions.
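Something like this defensive pattern (a sketch) does the trick:


// Query defensively: the running User's Profile row may not exist
// when executing from an Install Script context
List<Profile> profiles = [select Id, Name from Profile
    where Id = :UserInfo.getProfileId() limit 1];
if (!profiles.isEmpty()) {
    // Normal Profile-based logic
} else {
    // Assume we are running as the install-time "phantom" User
}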

 

We at Skoodat are considering exposing the platform we use for rapidly building and deploying killer custom UIs on Force.com — aka Skoodat Skuid. But we’re looking for your feedback: are we crazy, or does this totally excite you? Let us know!

Skoodat Skuid — Wicked cool UI platform for Force.com, or not?

Quick check to make sure this post is worth 5 minutes of your precious work day:

If you have ever wanted:

(1) To run long chains of Batch Apex jobs in sequence without using up scheduled Apex jobs
(2) To run batch/scheduled Apex as often as every 5 minutes every day
(3) To manage, mass-edit, mass abort, and mass-activate all your scheduled and aggregable/chained Apex jobs in one place, in DATA
(4) To avoid the pain of unscheduling and rescheduling Apex jobs when deploying

Then keep reading 🙂

A recent (well, maybe not so recent — is it June already???) blog post by Matt Lacey really struck a chord with me. Matt highlights one of the biggest developer woes related to using Scheduled Apex — whenever you want to deploy code that is in some way referenced by an Apex Class (or classes) that is/are scheduled, you have to unschedule all of these classes. With Spring 12, Salesforce upped the limit on the number of Scheduled Apex Classes from 10 to 25 (ha–lellujah! ha–lellujah!). However, with 15 more scheduled jobs to work with, this have-to-unschedule-before-deploying problem becomes even more of a pain.

But this isn’t the only woe developers have related to asynchronous Apex. An issue that has shown up a lot lately on the Force.com Developer Boards and LinkedIn Groups is that of running Batch Apex jobs in sequence — run one batch job, then run another immediately afterwards. Cory Cowgill published an excellent solution for accomplishing this, and it has been the go-to method for linking Batch Apex jobs ever since. But one of the problems with his method is that it leaves a trail of scheduled jobs lying in its wake — seriously cluttering your CronTrigger table. If you want to run these batch process sequences often — e.g. kicking them off from a trigger, or running them every day (or even every hour!) — you have to start “managing” your scheduled jobs more effectively.
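For reference, the chaining trick looks roughly like this (a sketch; the batch class names are hypothetical):


// The first batch's finish() schedules a throwaway Schedulable
// that launches the next batch one minute later
global class FirstBatch implements Database.Batchable<SObject> {
	global Database.QueryLocator start(Database.BatchableContext btx) {
		return Database.getQueryLocator([select Id from Account]);
	}
	global void execute(Database.BatchableContext btx, List<SObject> scope) {
		// ...process records...
	}
	global void finish(Database.BatchableContext btx) {
		// Each run of the chain leaves another one-off CronTrigger record behind
		Datetime next = System.now().addMinutes(1);
		String cron = '0 ' + next.minute() + ' ' + next.hour() + ' '
			+ next.day() + ' ' + next.month() + ' ? ' + next.year();
		System.schedule('Chain-' + next.getTime(), cron, new SecondBatchLauncher());
	}
}

global class SecondBatchLauncher implements Schedulable {
	global void execute(SchedulableContext ctx) {
		Database.executeBatch(new SecondBatch()); // hypothetical second batch
	}
}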

A third issue often cited by developers is the frequency at which jobs can be run. Through the UI, a single CronTrigger can only be scheduled to run once a day. Through code, you can get down to once an hour. If you wanted to, say, run a process once every 15 minutes, you’d have to schedule the same class 4 times — using up 4/25 (16%) of your allotted scheduled jobs — and you have to manage this through Apex, not the UI.
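In code, that looks something like this (MyScheduledClass stands in for your Schedulable class; note the four separate CRON expressions):


// Four CronTriggers, one class: fires at :00, :15, :30, and :45 every hour
System.schedule('Every15-0', '0 0 * * * ?', new MyScheduledClass());
System.schedule('Every15-1', '0 15 * * * ?', new MyScheduledClass());
System.schedule('Every15-2', '0 30 * * * ?', new MyScheduledClass());
System.schedule('Every15-3', '0 45 * * * ?', new MyScheduledClass());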

As I mulled over these issues, I thought, there’s got to be a better way.

There is.

Enter Relax

Relax is a lightweight app I’m about to release, but before I do, I’m throwing it out for y’all to try as a public beta. (The install link is at the end of the post, but restrain yourself, we’ll get there 🙂 )

Broadly speaking, Relax seeks to address all of these challenges, and a whole lot more.

At a high level, here are a few of the things it lets you do:

  1. Manage all your individually-scheduled Apex jobs in data records (through a Job__c object). Jobs can be mass scheduled and mass aborted, or mass exported between orgs, and then relaunched with clicks, not code.
  2. Create and schedule ordered Batch Apex “chains” of virtually unlimited length, with each “link” in the chain kicking off the next link as soon as it finishes. And all of your chained processes are handled by, on average, ONE Scheduled Job.
  3. Schedule jobs to be run as often as EVERY 1 MINUTE. ONE. UNO.
  4. Run ad-hoc “one-off” scheduled jobs without cutting into your 25 scheduled job slots.

Let’s jump in.

Individual vs. Aggregable
In Relax, there are 2 types of jobs: individual and aggregable. You create a separate Job__c record corresponding to each of your Individual jobs, and all of the detail about each job is stored in its record. You can then separately or mass activate / deactivate your jobs simply by flipping the Active? checkbox. Each Individual job is separately scheduled — meaning there’s a one-to-one mapping between a CronTrigger record and an Individual Job__c record. Here’s what it looks like to create an Individual Job. You simply specify the CRON schedule defining when the job should run, and choose a class to run from a drop-down of Schedulable Apex Classes.

Aggregable jobs, on the other hand, are all run as needed by the Relax Job Scheduler at arbitrary increments. For instance, you could have an aggregable job that runs once every 5 minutes that checks for new Cases created by users of your public Force.com Site, whose Site Guest User Profile does not have access to Chatter, and inserts Chatter Posts on appropriate records. You could have a job that swaps the SLA of Gold/Bronze SLA Accounts once every minute (contrived, yes, but OH so useful 🙂). Or you could have a series of 5 complex batch apex de-duplication routines that need to be run one after the other, set them all up as separate aggregable jobs, assign orders to them, and have the entire series run once every 15 minutes, every day. Here’s what the SLA swapper example looks like:

How do I use it?
What do you have to do for your code to fit into the Relax framework? It’s extremely simple. For Individual Jobs, your Apex Class just needs to be Schedulable. For Aggregable Jobs, there are several options, depending on what kind of code you’d like to run. For most devs, this will be Batch Apex, so the most useful option at your disposal is to take any existing Batch Apex class you have and have it extend the “BatchableProcessStep” class that comes with Relax:

// relax's BatchableProcessStep implements Database.Batchable<sObject>,
// so all you have to do is override the start, execute, and finish methods
global class SwapSLAs extends relax.BatchableProcessStep {

	// Swaps the SLAs of our Gold and Bronze accounts

	// Note the override
	global override Database.QueryLocator start(Database.BatchableContext btx) {
		return Database.getQueryLocator([
			select SLA__c from Account where SLA__c in ('Gold','Bronze')
		]);
	}

	global override void execute(Database.BatchableContext btx, List<SObject> scope) {
		List<Account> accs = (List<Account>) scope;
		for (Account a : accs) {
			if (a.SLA__c == 'Gold') a.SLA__c = 'Bronze';
			else if (a.SLA__c == 'Bronze') a.SLA__c = 'Gold';
		}
		update accs;
	}

	// The ProcessStep interface includes a complete() method
	// which you should call at the end of your finish() method
	// to allow relax to continue chains of Aggregable Jobs
	global override void finish(Database.BatchableContext btx) {
		// Complete this ProcessStep
		complete();
	}

}

That’s it! As long as you call the complete() method at the end of your finish() method, relax will be able to keep infinitely-long chains of Batch Jobs going. Plus, this framework is merely an extension to Database.batchable — meaning you can still call Database.executeBatch() on your aggregable Batch Apex and use it outside of the context of Relax.
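For example:


// Outside of Relax, the same class still runs as plain Batch Apex
Database.executeBatch(new SwapSLAs());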

Relax in action

In our org, we have 4 jobs set up: 1 individual, 3 aggregable. To kick them off, we select all of them, and change their Active? field to true. Here’s what it looks like after we’ve mass-activated them:

And here’s what the Scheduled Jobs table (accessible through the Setup menu) looks like. Since Case Escalator was set to be run individually, it has its own job. Then there’s a single “Relax Job Scheduler”, which runs every 5 minutes (I’ll probably drop this down to every 1 minute once I officially release the app), checking to see if there are any aggregable Jobs that need to be run, and running them:

Every time Relax Job Scheduler runs, the very first thing it does is schedule itself to run again — regardless of what happens during any processing that it initiates. It queries the “Next Run” field on each Active, Aggregable Job__c record that’s not already currently being run, and if Next Run is less than now, it queues it up to be run as part of a Relax “Process”. Each Process can have an arbitrary number of ProcessSteps, which will be executed sequentially until none remain. If both the current and next ProcessSteps are BatchableProcessSteps, Relax uses a “Process Balloon” to keep the Process “afloat” — essentially launching a temporary scheduled job that is immediately thrown away as soon as the next ProcessStep is begun.

One-off Jobs

Another powerful feature of Relax is the ability to effortlessly launch one-off, one-time scheduled jobs, without having to worry about cluttering the CronTrigger table with another scheduled job. It takes just 1 line of code, AND you can specify the name of the class to run dynamically — e.g. as a String! Reflection ROCKS!!!

// Schedule your job to be run ASAP,
// but maintain the Job__c record so that we can review it later
relax.JobScheduler.CreateOneTimeJob('myNamespace.AccountCleansing');

// Schedule your job to be run 3 minutes from now,
// and delete the Job__c record after it's run
relax.JobScheduler.CreateOneTimeJob(
    'SwapSLAs',
    Datetime.now().addMinutes(3),
    true
);

Try it out for yourself!

Does this sound awesome to you? Give it a test run! Install the app right now into any org you’d like! Version 1.1 (managed-released)

Please send me any feedback you may have! What would make this more usable for you? I’ll probably be releasing the app within the next month or so, so stay tuned!

When I hear the words “Reports” and “Managed Packages” in the same sentence, I involuntarily let out a grunt of displeasure. Ask any seasoned ISV, and I guarantee you that the same sour taste fills their mouths. Why? Well, here’s the classic problem: An ISV includes some Reports in their managed package. Now, a common trick for making Reports “dynamic” is to leave one of the Filter Criteria blank and then have its value passed in through query string parameters using the “pv<n>” syntax, where n is the parameter you’d like to populate. For example, in this report of Enrollments at a given School, parameter 2 (0-indexed) is left BLANK:

Then, if we load up this page with query string parameter “pv2” set to the name of a School, like so:
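(Here is a hypothetical example; the Report Id is made up, but note that all Report Ids share the 00O key prefix.)

https://na1.salesforce.com/00O30000001a2Bc?pv2=Lincoln%20High%20School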

the value that we passed in will be dynamically inserted into the 2nd Filter Criterion, and we’ll have a report on Enrollments at Lincoln High School:

This is awesome, right? Quick, let’s throw a Custom Link on the Account Detail Page called “Enrollments” that links to this report, but passing in the Id of the Report! Yeah, yeah, yeah! I love this!

Hold your horses, partner.

This is where the ISVs hang their heads in sadness… sorry son, it just ain’t that easy.

What’s the matter, grandpa? Come on, this is child’s play!

Not quite.

Notice where we said that we’d be passing in the ID of the Report. Hard-coded. For an ISV, IDs are the ultimate taboo. Why? Well, sure, you can package up the Report and Custom Link. But as soon as you install the package into a customer’s org, the Id of the Report has CHANGED — and the link is BROKEN. It’s a situation that the typical one-org Admin will never have to face, but, well, welcome to the world of the ISV.

Isn’t there one of those handy Global Variables which lets you grab the Name or DeveloperName of a Report?

Nope, sorry partner.

So, what DO you do?

Well, you write a ‘ViewReport’ Visualforce page that takes in the API name of a Report — which does NOT change across all the orgs that a package is installed into — and uses this API name to find the ID of the Report and send you to it. What does this look like?

The Visualforce is simple — one line, to be exact:


<apex:page controller="ViewReportController" action="{!redirect}"/>

The Apex Controller is a little more interesting. Here’s the meat, with test code included that achieves 100% coverage (so you can start using it right away!!!):


public class ViewReportController {

    // Controller for ViewReport.page,
    // which redirects the user to a Salesforce Report
    // whose name is passed in as a Query String parameter

    // We expect to be handed 1-2 parameters:
    // dn: the DEVELOPER name of the Report you want to view
    // ns: a salesforce namespace prefix (optional)
    public PageReference redirect() {
        // Get all page parameters
        Map<String,String> params = ApexPages.currentPage().getParameters();

        String ns = params.get('ns'); // NamespacePrefix
        String dn = params.get('dn'); // DeveloperName

        List<Report> reports;

        // If a Namespace is provided,
        // then find the report with the specified DeveloperName
        // in the provided Namespace
        // (otherwise, we might find a report in the wrong namespace)
        if (ns != null) {
            reports = [select Id from Report
                  where NamespacePrefix = :ns
                  and DeveloperName = :dn limit 1];
        } else {
            reports = [select Id from Report where DeveloperName = :dn limit 1];
        }

        PageReference pgRef;

        // If we found a Report, go view it
        if (reports != null && !reports.isEmpty()) {
            pgRef = new PageReference('/' + reports[0].Id);
            // Add back in all of the parameters we were passed in,
            // MINUS the ones we already used: ns, dn
            params.remove('ns');
            params.remove('dn');
            pgRef.getParameters().putAll(params);
        } else {
            // We couldn't find the Report,
            // so send the User to the Reports tab
            pgRef = new PageReference('/'
                + Report.SObjectType.getDescribe().getKeyPrefix()
                + '/o'
            );
        }

        // Navigate to the page we've decided on
        pgRef.setRedirect(true);
        return pgRef;

    }

    ////////////////////
    // UNIT TESTS
    ////////////////////

    // We MUST be able to see real Reports for this to work,
    // because we can't insert test Reports.
    // Therefore, in Spring 12, we must use the SeeAllData annotation
    @isTest(SeeAllData=true)
    private static void Test_Controller() {
        // For this example, we assume that there is
        // at least one Report in our org WITH a namespace

        // Get a report to work with
        List<Report> reports = [
            select Id, DeveloperName, NamespacePrefix
            from Report
            where NamespacePrefix != null
            limit 1
        ];

        // Assuming that we have reports...
        if (!reports.isEmpty()) {
            // Get the first one in our list
            Report r = reports[0];

            //
            // CASE 1: Passing in namespace, developername,
            // and parameter values
            //

            // Load up our Visualforce Page
            PageReference p = System.Page.ViewReport;
            p.getParameters().put('ns',r.NamespacePrefix);
            p.getParameters().put('dn',r.DeveloperName);
            p.getParameters().put('pv0','llamas');
            p.getParameters().put('pv2','alpacas');
            Test.setCurrentPage(p);

            // Load up our Controller
            ViewReportController ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            PageReference ret = ctl.redirect();

            // We should be sent to the View page for our Report
            System.assert(ret.getURL().contains('/'+r.Id));
            // Also, make sure that our Filter Criterion values
            // got passed along
            System.assert(ret.getURL().contains('pv0=llamas'));
            System.assert(ret.getURL().contains('pv2=alpacas'));

            //
            // CASE 2: Passing in just developername
            //

            // Load up our Visualforce Page
            p = System.Page.ViewReport;
            p.getParameters().put('dn',r.DeveloperName);
            Test.setCurrentPage(p);

            // Load up our Controller
            ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            ret = ctl.redirect();

            // We should be sent to the View page for our Report
            System.assert(ret.getURL().contains('/'+r.Id));

            //
            // CASE 3: Passing in a nonexistent Report name
            //

            // Load up our Visualforce Page
            p = System.Page.ViewReport;
            p.getParameters().put('dn','BlahBLahBlahBlahBlahBlah');
            Test.setCurrentPage(p);

            // Load up our Controller
            ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            ret = ctl.redirect();

            // We should be sent to the Reports tab
            System.assert(ret.getURL().contains(
                '/'+Report.SObjectType.getDescribe().getKeyPrefix()+'/o'
            ));

        }

    }

}

And here’s an example of using this code in a Custom Link:

Voila! An ISV-caliber way to link to any Report in any Managed Package, and it won’t break in an org where the package is installed!

The basic flow of the Apex is pretty simple: the redirect method gets called immediately upon page load, and it returns a page reference to redirect the user to. So all that Apex needs to do for us is find the Report with the provided API name / DeveloperName (and optionally in a specific namespace), and send us to /<Id>, where <Id> is the Id of that Report. Pretty straightforward. Just a few interesting points:

  1. We ‘tack-on’ to the resultant page reference any OTHER page parameters that the user passed along, so we can pass in any number of dynamic Filter Criteria parameters using the pv<n> syntax.
  2. You may be wondering — wait, you can QUERY on the Report object? Yep! Reports are technically SObjects, so you can query for them, but they fall under the mysterious category called “Setup” objects which, among other peculiar quirks (Google “MIXED_DML_OPERATION” for one of the more annoying ones), only selectively obey CRUD and only expose some of their fields to SOQL. Fortunately, for our purposes, the Id, Name, DeveloperName, and NamespacePrefix fields are all included in this short list. Actually, fire up SOQLXplorer — you might be inspired by some of the other fields that are exposed.
  3. Namespacing of Reports — Reports included in Managed Packages don’t have to have a globally unique name — they only have to be unique within their Namespace. Therefore, when querying for Reports, it’s best to query for a report within a particular namespace.
  4. If the Report is not found — in our example, we send the User to the Reports tab. You might want to do something different.

Anonymous Apex can be extremely useful when trying to perform complex, cross-object, bulk data manipulation tasks that might only need to be done once, and never used again. For such situations, writing a piece of code to achieve your functionality, writing test code, and deploying it to production — when you might never use the code again — doesn’t make much sense. Enter Anonymous Apex!

However, using Anonymous Apex can potentially be VERY dangerous. You could royally mess up a lot of data, and never even know until it was too late. Unless…

You take some precautions.

What if you could verify that your long, convoluted Anonymous Apex routine was actually doing what it was supposed to do as it proceeded, similar to the way Apex Unit Tests work with System.asserts?

Well, you can! Anonymous Apex supports all of the System.assert statements, AND it supports Database.rollbacks. When the two are used in tandem, you get a powerful disaster prevention mechanism that makes Anonymous Apex a lot less risky and hackish.

Example 1: Find all unresolved, non-high-Priority Cases over a year old related to major Accounts (i.e. those with really big Closed Opportunities), make them High or Critical Priority, and Escalate them to the top Service representative.


List<Opportunity> bigDeals = 
	[select AccountId 
	from Opportunity
	where Amount > 400000 
	and StageName = 'Closed Won']; 
	
// Sanity check --- make sure we found some deals
System.assert(bigDeals.size() > 0,
	'We did not find any Big Deals'); 

Set<Id> accountIds = new Set<Id>(); 
for (Opportunity opp : bigDeals) {
	accountIds.add(opp.AccountId);
}
// Sanity check
System.assert(accountIds.size() > 0, 
	'We did not find any Account Ids'); 

// Note: we also query OwnerId, since we read it below
List<Case> unresolvedCases = 
	[SELECT Id, Priority, Status, Reason, OwnerId
	FROM Case 
	WHERE Status != 'Closed' 
	AND AccountId in :accountIds 
	AND Priority not in ('High','Critical')
	AND CreatedDate < LAST_N_DAYS:365]; 

// Sanity check 
System.assert(unresolvedCases.size() > 0,
	'We did not find any unresolved Cases tied to our Big Deals'); 

// Find our best Support Rep,
// to whom we will reassign our highest priority cases
User awesomeRep = 
	[select id,LastName 
	from User 
	where Email = 'awesome_support_rep@company.com' 
	limit 1];
	
// Make sure we found the User we think we did
System.assertNotEquals(null, awesomeRep, 'awesomeRep'); 
System.assertEquals(
	'Rep Last Name',
	awesomeRep.LastName,
	'awesomeRep.LastName'
);

Integer numDefects = 0;
Integer numOtherCases = 0; 

for (Case c : unresolvedCases) { // Escalate the case
	c.Status = 'Escalated'; 
	// If the Case Reason is 'Defect', 
	// the Priority should be 'Critical' 
	if (c.Reason == 'Defect') { 
		numDefects++; 
		c.Priority = 'Critical';
	}
	// Otherwise, Priority should be 'High' 
	else { 
		numOtherCases++; 
		c.Priority = 'High'; 
	} 
	// Set the Owner to our awesome Support Rep
	if (c.OwnerId != awesomeRep.Id) c.OwnerId = awesomeRep.Id; 
}
// Sanity check
System.assert(numDefects > 0 || numOtherCases > 0, 
	'we only want to proceed '
	+ 'if we updated at least some of our cases'); 

// Sanity check --- force code to stop here
// to let us examine how things went 
// before we actually do the update
System.assert(false,'Sanity Check!'); 

Savepoint sp = Database.setSavepoint();
try { 
	update unresolvedCases; 
} catch (Exception ex) { 
	// If we catch any errors, 
	// rollback the database to how it was 
	// prior to the DML call
	Database.rollback(sp);
	// Then display the exception
	System.assert(false,ex.getMessage()); 
}

I often find that code that starts out as Anonymous Apex routines eventually makes its way into actual classes / controllers / triggers — and Unit Tests.

If you haven’t used Anonymous Apex before, you can use it from all sorts of locations — from the Force.com System Log (old or new — the new version has excellent code/syntax highlighting), from the Force.com IDE, or from SoqlXplorer (on the Mac).