
Today I was creating some new Salesforce Permission Sets for objects with hundreds of fields, where the Permission Set needed access to just about every field on the object. If you’ve ever been in this scenario, you know what this means — get ready to manually check hundreds of checkboxes!

[Screenshot: a Permission Set object settings page with lots and lots of fields]

As a developer, any time you have to do something repetitive, you start going nuts — there’s got to be a way to do this faster!

In this case, the solution seemed pretty straightforward: just use jQuery to find all the checkboxes and check them! In Chrome, Firefox, or Safari, just right-click, select “Inspect Element”, and then enter this one easy line of code:

[Screenshot: running the jQuery one-liner in the browser console]

That line of code, again, is:

jQuery('input[type="checkbox"]').prop('checked',true);

This should work great, but there’s just one problem — Salesforce standard pages don’t usually include jQuery!

This leaves us with two options. One, we can resort to native DOM manipulation methods. If you are very comfortable with native DOM methods, then this is the way to go. Here’s the code:


// document.querySelectorAll returns a NodeList,
// so in order to leverage the ECMAScript 5 "forEach" method of Arrays,
// we'll convert our NodeList to an Array using slice

// (Setting the "checked" property, rather than the "checked" attribute,
// matches what jQuery's .prop() does, and works even on checkboxes
// that have already been toggled.)
Array.prototype.slice.call(document.querySelectorAll('input[type="checkbox"]')).forEach(function(e){ e.checked = true; });

For a comparison of the complexity of these two approaches, take a look at this Gist.

If the DOM works for you, you’re done! However, if you don’t think you can remember how to code this every time, and/or prefer jQuery, never fear — there’s a really easy way to inject jQuery into any page you may be visiting: the jQuerify Chrome Extension. To get it, just go to the Chrome Extensions library and search for “jQuerify”:

[Screenshot: searching the Chrome Extensions library for jQuerify]

Once you’ve added this extension, you get a handy little button in the top right corner of Chrome that lets you embed jQuery into any page with one click, including your Salesforce Permission Set / Profile editor!

[Screenshot: the jQuerify button on the page]

Now, you can confidently run the aforementioned line of jQuery code, and ALL Checkboxes will be automatically checked! Yahoo!

[Screenshot: all checkboxes checked]

Whichever method you use, native DOM or jQuery, all it takes to repeat this magic when you move to the next Object with hundreds of fields is to go back to the browser console, hit the Up Arrow key on your keyboard, and voila, you’ve got your last-run script. Hit Return, and the magic repeats!

Use Firefox and Firebug instead of Chrome? There’s an equivalent plugin for Firebug, called FireQuery.

One of the most hyped, and in my opinion least documented, features of the Summer 13 release of Salesforce is custom Chatter Publisher Actions. Various Salesforce bloggers have written about how easy it is to create and use regular Chatter Publisher Actions, which basically let you quickly create records related to a primary record from your Chatter Feed (e.g. from an Account’s Chatter feed, creating a Contact or Opportunity without having to go down to the corresponding related lists). For one excellent discussion of regular Publisher Actions, see Daniel Hoechst’s post. The release notes also give examples of regular Publisher Actions, so I won’t go into any more depth on them here.

However, neither the release notes, nor any other bloggers, nor Salesforce themselves in their recent release webinars, have yet shown any working examples of custom Publisher Actions. What are they? Here’s a brief overview:

  • They come in two flavors: Global and Object-Specific.
    • Global custom Actions: can be added to any Chatter Publisher Layout, anywhere.
    • Object-specific custom Actions: can only be added to one specific object’s Publisher Layouts.
  • You create them using Visualforce Pages.
    • For Global custom actions, your Visualforce page must either have no controller, or a custom controller that is NOT an extension.
    • For Object-specific custom Actions, your Visualforce page must use the standard controller of the object you’d like to use the Action with.
  • Chatter implements custom Actions using iFrames.
    • When creating a new custom Action, you must specify a height for your Action — this is setting the height of the iFrame in which your page will be included.
    • This iFrame is always run from within the Visualforce Page’s domain. So, if it’s a local VF Page, the domain will be something like “c.na14.visual.force.com”, whereas if the page is included in a managed package, the domain will be something like “skuid.na14.visual.force.com”.

So custom Publisher Actions, by virtue of having to be created through Visualforce Pages and being embedded in iFrames, are already more complicated to develop. This raises the question: what sort of “Action” would be compelling enough to warrant going through this hassle? What’s a good use case?

A use case: create a Contact and linked Opportunity Contact Role all at once, from an Opportunity

Many of the examples I thought of fall into one category: creating records of multiple objects all at once, in particular creating a junction object record along with the joined record if it doesn’t exist yet. For example, from an Opportunity record, you often want to create a new Opportunity Contact Role, but can’t because the related Contact doesn’t exist yet. Or, in a hypothetical iTunes-on-Force.com app, from an Album’s page, create a “Genre Association” junction object as well as a new Genre if it doesn’t exist yet.

I finally settled on the “Create a Contact and Opportunity Contact Role all at once” example.

Making it happen

The first step in creating a Custom Publisher Action is to create the Action’s associated Visualforce Page. For this example, my Custom Action is specific to the Opportunity object, so our Page must use the Opportunity Standard Controller. As for the body of the page, since I’m using Skuid, I just need to specify the name of a Skuid Page I want to include, which we’ll build later.
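Here is a rough sketch of what such a page can look like (this assumes Skuid’s skuid:page Visualforce component; the Skuid page name is just an illustration):

<apex:page standardController="Opportunity" showHeader="false" sidebar="false">
    <!-- Render the Skuid page that does the actual work -->
    <skuid:page page="NewOpportunityContactRole"/>
</apex:page>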

[Screenshot: the Visualforce Page for the Feed Action]

Now that we have a Visualforce Page associated to the Opportunity standard controller, we can create our Action!

The first step is to go to the newly-reorganized Opportunity “Buttons, Links, and Actions” section. In Summer 13, Salesforce consolidated Custom Buttons, Custom Links, Standard Actions and Overrides (i.e. the place where you can override what happens for the “View”, “Edit”, “New”, and “Tab” actions for each object), and the new Chatter Publisher Actions into this one list for each Object.

Click on “New Action”, and then enter the following:

[Screenshots: the Buttons, Links, and Actions section, and the New Action form]

Notice two things:

  1. The “Height” attribute corresponds directly to the height of the iFrame which will embed your Visualforce page.
  2. The “Label” is the actual label that users will see for your new Publisher Action, e.g. they’ll see “Link”, “File”, “Post”, “Poll”, and “Add Contact”.

Now that we’ve created the action, we need to actually add it to one of our Opportunity page layout’s Publisher Actions area. By default, all objects will inherit from the default “Global” Publisher Action area, so we’ll need to explicitly “break” this inheritance to allow the set of Publisher Actions that are displayed on our Opportunity layout(s) to be different from the Publisher Actions for all other objects. To do this, click “override the global layout” in the “Publisher Actions” area of one of your Opportunity page layouts.

[Screenshot: the “override the global layout” link]

Next, drag in the “Add Contact” Publisher Action we just created, and position it as you’d like relative to the other Publisher Actions:

[Screenshot: dragging in the new Action]

Once we save the Page Layout, we’re ready to roll! This Publisher Action should show up in our Opportunity page’s Chatter feed. Because we positioned it last in the list, and there are more than four Publisher Actions, our Action shows up under the “More” area:

[Screenshot: selecting the Publisher Action from the “More” menu]

Now, we’ll show how we built this with Skuid later, but here’s what our custom Publisher Action actually looks like:

[Screenshot: the custom Publisher Action in action]

We’re presented with two columns of data. In the first column, we enter some bare bones info for a new Contact record we want to create that’s associated with this Opportunity’s Account. In the second column, we define the Opportunity Contact Role. Once we click Save, what Skuid will do is the following:

  1. Create a new Contact record using the fields we filled out, with the AccountId of the Opportunity in context (“Acme”)
  2. Create a new Opportunity Contact Role record, using the fields we entered, and associated to the Opportunity in context (“Acme”) and our newly created Contact (“Bilbo Baggins”)
  3. Create a new post in this Opportunity’s Chatter Feed describing what just happened.
  4. Refresh the page — we’ll describe why this is necessary in a second.

In the Skuid page I built in place of the standard Opportunity detail page, I display Opportunity Contact Roles in a “Key People” tab. As you can see, a new “Bilbo Baggins” contact record was created, and an Opportunity Contact Role record linking him to the “Acme – 500 Widgets” Opportunity was created as well.

[Screenshot: the Contact Role created by the Feed Action, shown in the “Key People” tab]

Finally, a new FeedItem was created that records the addition of the new Contact and his Role:

[Screenshot: the FeedItem created by the Action]

The Skuid page that made the real magic happen

Getting the actual Publisher Action configured is pretty quick once you’ve got a Visualforce Page on hand to use. But it’s this Visualforce Page, of course, which has to do all the hard work to make the custom Publisher Action useful.

Using Skuid as the content of the Publisher Action made the process very fast and fairly painless. In a nutshell, here’s what I did:

(1) Create a new Skuid Page using the “New Page” template on the Contact object.

The “New Page” template, here, scaffolds a basic Skuid Page for use in creating brand new Contact records. It sets up a Model on the Contact object with “Create Default Row if None Exists” pre-checked, and adds a Field Editor component with the FirstName and LastName fields already thrown in:

[Screenshot: creating the new Skuid Page]

[Screenshot: the Contact Model properties]

(2) Add in 3 additional Models: on the Opportunity, OpportunityContactRole, and FeedItem objects:

[Screenshot: the additional Models]

The Opportunity Model grabs the Opportunity record in context from the page, using the “id” URL Parameter. We then grab the AccountId lookup field from the Opportunity, so that we can use this to automatically prepopulate the AccountId of our newly-created Contact:

[Screenshots: the Opportunity Model conditions and fields]

For the Opportunity Contact Role model, we pull in the Role and IsPrimary fields so that we can display them in the page, and we add 2 “Model Merge” Conditions to associate the new record to the context Opportunity and the newly-created Contact:

[Screenshot: the Opportunity Contact Role Model conditions]

Finally, for the FeedItem model, we add a single condition to automatically associate the new FeedItem to the Opportunity in context, and we add in the Body field, which we will edit in our Save action:

[Screenshots: the FeedItem Model conditions and fields]

(3) Place 2 Field Editors, one each for the Contact and ContactRole Models, in separate Panels:

[Screenshot: the Panel Set layout]

(4) Embed this Panelset within the first step of a new Wizard Component

We could have used a Page Title component instead, but we use a Wizard because it allows for an easy transition into multi-step flows: we could easily have put the Opportunity Contact Role fields within a separate step of the wizard, and had “Next Step”/”Previous Step” sequential navigation here.

[Screenshot: the Wizard component]

(5) Create a custom Inline (Snippet) to use for our “Save All” Wizard Step action

Skuid’s standard wizard action types almost suffice for achieving everything we want to do here, but two things take a little code. First, we want to automatically populate the Body field of our new FeedItem. Second, because we’re using this Page as a Chatter Publisher Action, which is run in an iFrame, we run into a gotcha with the URL Redirect action: we need to reload the location of our frame’s parent, not the frame (window) itself.

First, we customize our Action to execute a Snippet called Save All:

[Screenshot: the “Save All” step action]

Finally, we create the Inline (Snippet) itself:

[Screenshot: creating the Inline (Snippet)]

// Get our Models
var contactModel = skuid.model.getModel('ContactData'),
    contactRoleModel = skuid.model.getModel('ContactRole'),
    feedItemModel = skuid.model.getModel('FeedItem');

// Get the first rows in each of our Models
var contact = contactModel.getFirstRow(),
    contactRole = contactRoleModel.getFirstRow(),
    feedItem = feedItemModel.getFirstRow();

var bodyText = 'Added new ' + contactRole.Role
    + ' to this Opportunity: '
    + contact.FirstName + ' ' + contact.LastName
    + ' (' + contact.Title + ').';

// Update the Body of our new Feed Item
feedItemModel.updateRow(feedItem,'Body',bodyText);

// Save our Models,
// and when we're done,
// refresh the iframe's parent

skuid.model.save(
    [contactModel,contactRoleModel,feedItemModel]
,{ callback: function() {
    parent.location.reload();
}});

In the Snippet, we do the following:

  1. Get references to 3 of our Models
  2. Get references to their first rows
  3. Populate the “Body” field of the FeedItem model’s first row with the Role field from our OpportunityContactRole record and the First and Last Name fields from our Contact record
  4. Save all 3 of our Models in sequence, starting with the Contact model (VERY important) so that the later models can use the newly-created Contact’s Id to populate their new records.
  5. Once the save is done, we reload our parent window. This is necessary because this Skuid page will be stuck into an iFrame by Chatter.

Thorny Gotchas, and the undocumented Chatter Publisher API

The basic buildout of this took less than 30 minutes — seriously! However, we spent some time trying to work out how best to “talk” to the Chatter Publisher using the undocumented Chatter Publisher API — an effort which ended in nothing but frustration.

The thorny gotchas to avoid are mostly related to the fact that custom Publisher Actions are implemented using iFrames:

  1. Any JavaScript you write in your child Visualforce Page will be unable to talk to the parent page UNLESS both your page and the page containing the Chatter component are in the same domain, due to cross-domain scripting restrictions — which is impossible unless BOTH pages are Visualforce. Consider these scenarios:
    1. Chatter Feed in “na15.salesforce.com”, Custom Action VF Page in “c.na15.visual.force.com” — BLOCKED.
    2. Chatter Feed in “c.na15.visual.force.com”, Custom Action VF Page in “skuid.na15.visual.force.com” — BLOCKED.
    3. Chatter Feed in “c.na15.visual.force.com”, Custom Action VF Page in “c.na15.visual.force.com” — SUCCEEDS.
    4. Chatter Feed in “skuid.na15.visual.force.com”, Custom Action VF Page in “skuid.na15.visual.force.com” — SUCCEEDS.
  2. We couldn’t find a way, using the undocumented Chatter API, to “refresh” the Chatter Feed to show our newly-created Feed Item, so we had to do a full page refresh to get the Chatter Feed to show this. We tried the following methods, none of which worked:
    1. chatter.getPublisher().resetPublisher()
    2. chatter.getPublisher().submit()
    3. chatter.getFeed().refresh()
    4. chatter.getFeed().showNewUpdates()

We would have LOVED to use one of these methods in our Skuid “saveAll” Snippet, but we’ll have to wait for Chatter to document its API.


To help foster an ongoing conversation among Salesforce ISV and OEM partners — aka developers of Salesforce AppExchange apps — I started this discussion on the Salesforce ISV Partners LinkedIn group, which I encourage fellow ISV’s/OEM’s to join:

Let’s pool our thoughts – best practices for Salesforce ISV/OEM app development

One of the best practices I brought up was the need to properly “protect” or “sandbox” your application’s references to external JavaScript libraries within complex “mash-up” pages that include JavaScript code written by various other ISV’s / OEM’s / consultants / developers, as well as underlying Salesforce.com JavaScript code.

These days, more and more apps developed on the Salesforce Platform rely heavily on external JavaScript libraries such as jQuery, jQuery UI, jQuery Mobile, ExtJS, Sencha Touch, KnockoutJS, AngularJS, Backbone, MustacheJS, Underscore, JSONSelect, etc. Leveraging these libraries is definitely a best practice — don’t reinvent the wheel! As jQuery quips, “write less — do more!” As a Salesforce consultant, I think this is generally the goal :)

Problems emerge, though, when multiple ISV’s include different versions of these JavaScript libraries as Global Variables within their Visualforce Pages or Visualforce Components — because whichever version is loaded last will, by default, overwrite the earlier version. This makes it very difficult to mash up / integrate Visualforce Components or Pages from multiple ISV’s into a single page. When faced with this, a common developer response is to stop using the latest version of the external library and try to make their code work against the earlier version of the library forcibly included in the page (perhaps by a managed Visualforce Component or embedded Visualforce Page).

Fortunately, there IS a better way to avoid this.

In a nutshell, the solution is: “protect” or “localize” your references to any external libraries, preferably in a JavaScript namespace corresponding to your managed package.

For instance, if your Salesforce application has the namespace “skuid”, you’re probably already going to have various JS Remoting functions available within the “skuid” JavaScript object that Salesforce automatically creates in pages whose controllers have JS Remoting methods — and as an ISV, your managed app’s namespace is guaranteed to be unique across Salesforce. So this namespace is just about the safest global variable you can use in the Salesforce world (anyone else who messes with it is being very, very naughty).

As a brief side-note, here’s how to ensure that your app’s “namespace global” has been defined before using it:

// Check that our Namespace Global has already been defined,
// and if not, create it.
window.skuid || (window.skuid = {});

To protect your external library references, store a unique reference to these libraries within your namespace’s object, IMMEDIATELY after loading in an external library such as MustacheJS:

// Load in Mustache -- will be stored as a global,
// thus accessible from window.Mustache
(function(){ /* MustacheJS 0.7.2 */ })();

// Store a protected reference to the MustacheJS library that WE loaded,
// so that we can safely refer to it later.
skuid.Mustache = window.Mustache;

Then, even if some other VF Component or Page loads in a different version of this library later on, you will still have access to the version you loaded (0.7.2):

// (other ISV's code)
window.Mustache = (function(){ /* MustacheJS 0.3.1 */ })()

// THIS code will run safely against 0.7.2!
skuid.Mustache.render('{{FirstName}} {{LastName}}',contactRecord);

// THIS code, however, would run against the latest version loaded in, e.g. 0.3.1,
// and thus FAILS, (since Mustache 0.3.1 has no render() method)
Mustache.render('{{Account.Name}}',contactRecord);

How to avoid having to use global references all the time

Some of you are probably thinking, “Great, now I have to prepend this global namespace all over the place!” Fortunately, with JavaScript, that’s not necessary. By wrapping your code in closures, you can safely keep using your familiar shorthand references to external libraries without worrying about version conflicts.

For instance, say that your application uses jQuery 1.8.3, but other Visualforce Components on the page are using jQuery as old as 1.3.2! (SO ancient… :) What’s a developer to do?

Well, jQuery provides the helpful jQuery.noConflict() method, which allows you to easily obtain a safe reference to a version of jQuery immediately after you load it into your page. So, as an example, in YOUR code, you pull in jQuery 1.8.3:

<!-- Load jQuery 1.8.3 -->
<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script type="text/javascript">
// Get a safe reference to jQuery 1.8.3
var jQuery_1_8_3 = $.noConflict(true);
</script>

Then, to your horror, another Visualforce Component, which the customer you’re developing for has “got to have” in this same page (and which you don’t want to iframe…), has loaded in jQuery 1.3.2, but not bothered to namespace it!!! Therefore, both of the commonly-used jQuery globals (jQuery and $) now refer to jQuery 1.3.2!

Fortunately, FOR YOU, you’re safe! You got a protected reference to jQuery 1.8.3, and your code can carry on using $ without any issues, as long as you wrap it in a closure:

(function($){

   $('.nx-table').on('click','tr',function(){
       // Add "edit-mode" styles to this table row
       $(this).toggleClass('edit-mode');
   });

// Identify jQuery 1.8.3 as what we are referring to within this closure,
// so that we can carry on with $ as a shorthand reference to jQuery
// and be merry and happy like usual!
})(jQuery_1_8_3);

This one’s for you, coffee-shop hopping Force.com developers who go to make some changes to your code in Eclipse and — blocked! Login must use Security Token! Uggh. But I hate Security Tokens – sure, sure, it makes way more sense to use them. Definitely a best practice. But, well, sometimes I’m lazy. I confess – I very often just use Trusted IP Ranges.

So, for those of you out there that have been in my situation, and like me don’t like entering Security Tokens into your Eclipse, SOQLExplorer, Data Loader, etc., and prefer to just use Trusted IP Ranges and basic username/password, I’m guessing you find yourselves doing this over and over again:

  1. Show up at Coffee Shop you haven’t been to before – or at which you’ve never done any development on a particular Force.com project / org.
  2. Login to Eclipse.
  3. Try to save something.
  4. Fail – you’re at a non-trusted location and you’re not using the Security Token!
  5. Login to Salesforce.com.
  6. Go to your Personal Information.
  7. Copy the IP address from your most recent login.
  8. Go to Network Access, and add a Trusted IP Range, and paste in the IP Address from Step 7 into both Start and End IP Address. Click Save.
  9. Go back to Eclipse and resave.

Wish there was a way to do this faster?

There is – here’s a little convenience script for the uber-lazy, coffee-shop hopping, Security Token-hating Force.com developer – if there are any of you out there other than me, this is for you!

The Idea: a one-click link in a Home Page Component

My idea: create a Home Page Component for the Left Sidebar that, in one-click, adds a new Trusted IP Range corresponding to the current IP you’re at.

Problem 1: there’s no way to get your current IP address from just pure client-side JavaScript. Lots of server languages can give you this, but not client-side JavaScript.

Problem 2: how to quickly create a new Trusted IP Range? There’s no API access to the Trusted IP Range object (which is probably a good thing from a security perspective :)

Problem 1 Solution: Use JSONP + an External Site

To make this a reality, I took the advice of StackOverflow and leveraged JSONP, which allows you to send an AJAX request to an external website and have the result returned in JSON format to a pre-designated JavaScript callback function, which immediately gets called. The basic syntax of this is shown below.
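Here is a sketch of that basic syntax, using jQuery’s getJSON() and the same Smart IP endpoint that appears in the final component code below:

// The "callback=?" placeholder tells jQuery to generate and register
// a JSONP callback function name for us
jQuery.getJSON("https://smart-ip.net/geoip-json?callback=?", function(data){
    // "data" is the JSON object the endpoint returned;
    // its "host" property holds your current public IP address
    console.log(data.host);
});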

Problem 2 Solution: URL Redirect with params passed in

One thing I love about the Force.com platform is that almost every action you might want to take, whether for modification of metadata or data, can be initiated by going to a specific URL. To create a new Account record, you go to “/001/e”. To view all users, you go to “/005”. And almost any field on a particular page can be pre-populated by passing a corresponding URL Parameter (although these are not always consistent). And so it is with Trusted IP Ranges – to create a new Trusted IP Range, head to “/05G/e”. And to pre-populate the Start and End IP Address, you simply have to pass in the right query string parameters: “IpStartAddress” and “IpEndAddress”.
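For example, a relative URL like the following (the IP address is just an illustration) opens the New Trusted IP Range page with both fields pre-filled:

/05G/e?IpStartAddress=203.0.113.5&IpEndAddress=203.0.113.5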

All together now! The final product.

So here’s the final solution:

  1. Go to Setup > Home Page Components > Custom Components.
  2. Click New.
  3. Enter a name, e.g. “Add IP to Trusted Ranges”, and select “HTML Area” as the type. Click Next.
  4. For Position, select “Narrow (Left) Column”.
  5. Click the “Show HTML” checkbox on the far right, and enter the following code for the body, then click Save.

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script type="application/javascript">
function addIPToTrustedRanges(){
    jQuery.getJSON(
        "https://smart-ip.net/geoip-json?callback=?",
        function(data){
            var ip = data.host;
            window.location.href =
                "/05G/e?IpStartAddress=" + ip + "&IpEndAddress=" + ip;
        }
    );
}
</script>
<a href="javascript:addIPToTrustedRanges();">Add Trusted IP (using Smart IP)</a>

6. Go to Home Page Layouts, and add this Home Page Component to your layout as a System Admin.

Now return to your home page, and voila! Here’s what you’ve got: a link that, when clicked, takes you to a prepopulated new Trusted IP Range – just click Save, and boom, get back to Eclipse and save your code!

[Screenshot: the Trusted IP Range Home Page Component]

[Screenshot: the prepopulated IP Address Range]

How does it work?

Nothing crazy here, but I’ll step through the code anyhow:

  1. Load jQuery from the Google API’s CDN, so that we can use the handy getJSON() method.
  2. Define a global function called addIPToTrustedRanges – this function performs an AJAX callout to the endpoint “https://smart-ip.net/geoip-json”, passing in the name of a callback function to execute with the JSON data that this endpoint returns. So Smart IP basically returns a little string of data that is a JSON-serialized JavaScript object, whose “host” property is your current IP Address (the real one, not some NAT-translated private IP). Then jQuery calls your callback function — which, in the getJSON() method, is defined as an anonymous function in the second parameter — and passes in this JSON data as an argument to the function. This is all legal, within the confines of the JavaScript same-origin policy, because of JSONP.
  3. In our anonymous callback function, we do a URL redirect, passing in our current IP address as both the Start and End IP Address as parameters to the new Trusted IP Address page:
    "/05G/e?IpStartAddress="+ip+"&IpEndAddress="+ip;
  4. Then we click Save, and we’re done!

In response to some folks’ questions, I thought I’d write up a very brief summary of the “right” way to go about diving into the Salesforce.com Partner world, whether as an ISV or OEM. There’s an order to this madness that can help prevent “wow, wish I’d done it that way from the beginning” moments later on. Plus, there’s a whole alphabet soup thrown around related to Salesforce environments / orgs of different types, and their roles in the partner lifecycle.

A quick check to see whether this article will be helpful. If you can identify the meaning and role of each of the following Salesforce-related abbreviations, you can get on with your day. Otherwise, keep reading!

  • APO
  • LMO
  • PDE
  • TMO
  • ISV
  • DE
  • OEM
  • LMA

What I’ll do in this article is to walk through a “recommended” process for getting started developing Salesforce.com applications, along the way highlighting different Salesforce.com Organization / Environment types, and at the end, I’ll briefly summarize. Here we go!

1. Register as a Salesforce.com Partner.

To do this, go to http://www.salesforce.com/partners/join/ and fill out the form. You’ll receive by email a login to the Salesforce.com Partner Portal, which allows you to do all sorts of necessary things in the Partner lifecycle, like creating special orgs, logging partner support cases, and getting access to special training materials, etc.

2. Start building your Application!

For native Force.com applications, this should take place in your managed-package development org. You will have one, and only one, of these for each Managed Package / App you are building on Salesforce.com. This org must, as the name suggests, be some sort of Developer Edition (DE) org. However, you have choices as to which type of Developer Edition org you can do this in, and one is better than the other:

  • Regular DE Orgs: (totally fine, but not ideal) A regular 2-user DE org such as can be acquired by going to developer.force.com
  • “Partner Developer Edition” (PDE) orgs (ideal, available to registered partners only: see step 1) A “Super-Sized” Developer Edition org, which can be obtained by registered Salesforce.com Partners by logging in to the Partner Portal, clicking “Create a Test Org”, and selecting, as the org type, “Partner Developer Edition”. This is the ideal place to develop a Managed Application for the AppExchange, as you can have up to 20 full Salesforce user licenses, and your data limits are higher than regular DE orgs’ limits, among other advantages.

Whichever of the two org types you go with, for each AppExchange Application that you would like to build, you must choose one org to be where you create your app’s corresponding “Managed Package” – the Salesforce.com application distribution mechanism. You can create a Package in Salesforce by going to Create > Packages. However, you cannot make a package “Managed” – a requirement for an AppExchange app – until you have chosen a Salesforce-unique namespace for the package, e.g. ‘acme’. Once you’ve chosen this, you’ll be able to convert your package, which contains all of your app’s custom objects, fields, Apex code, Visualforce pages/components, and many other possible metadata types, into a Managed package. Each Developer Edition org can only have one Managed Package, and namespaces cannot be migrated between orgs.

For those of you who did not know about the PDE org option, it is important to note that it is perfectly fine to have your managed-package development org be a regular DE org — there is no need for it to be a full PDE for it to be capable of publishing AppExchange apps. It just makes the development lifecycle a lot easier if you develop in a PDE.

3. Get an ISV / OEM Contract to legitimize your Salesforce.com Partnership

*WARNING*: This step takes a LONG TIME (read: MONTHS).

For those of you who are wondering why this is even necessary — why can’t I just build a managed package and then require that someone writes me a check before I give them an install link? — there are many reasons to go through the hassle. For one, the Salesforce.com Partner Program is pretty rocking — they have killer tools that you’re not going to get if you go with other platforms like Zoho, who haven’t had the time to build up their Partner Programs as well as Salesforce.com has. Second, you’ll find that unless you’re just writing a few apps for your company or small group of clients, the whole hand-out-install-links method just doesn’t work. At a base level, there’s no way to manage licenses, see who’s using your app, or distribute using Salesforce’s public channels (e.g. AppExchange). Basically: you NEED to go through this process, even though it’s painful. There’s some serious awesomeness at the end of the tunnel: dive in, and be patient. You’ll get through it.

Salesforce really has two Partner models, or ways you can sell your apps, described very well here: AppExchange Partner Programs and Models.

  • ISVForce (the ISV program) – you sell apps meant to be installable into most any Salesforce org. Often these are general-purpose utility apps, like data cleansing apps, integration tools, user interface development tools (like Skuid!), mapping apps, etc. Under this model, existing Salesforce users can be licensed to use your ISV app in addition to whatever other apps they currently have and in addition to their existing Salesforce CRM license.
  • Force.com Embedded Edition (the OEM model) – this is for Partners who want to leverage the Force.com platform – e.g. the cloud platform, logins / roles / profiles / security, object building – everything BUT CRM Objects and Service Cloud, pretty much – as a base for an industry-specific offering, e.g. a student information system. Under this model, users come to YOU for a Salesforce license, and you give them a Platform License + your Application. They can’t access CRM Objects (e.g. Opportunities, Products) but they can access Accounts, Contacts, and the Force.com platform.

Basically, once you’re officially “in” to the Partner Program, you get all kind of power to develop killer apps, sell and manage licenses, and support your customers once they’ve subscribed.

But, getting back to the process, the first HUGE thing you get by finishing out your ISV / OEM contract is an Enterprise Edition CRM org with 2 free licenses (you can pay for more as your company grows). This is referred to variously as your CRM for Partners org, your License Management Organization (LMO), or your partner Business Org; as we’ll see later, it is recommended that this also be your AppExchange Publishing Org (APO), but it does not have to be. From here on out, I’ll refer to it as your CRM org or your LMO, as well as your APO. In almost all cases, each company will only have one CRM/LMO org, which will be the place where you manage, for all of your company’s AppExchange applications/packages (some partners sell many different apps!), package versions, licenses, and customer relationships (e.g. Leads, Accounts, Contacts, Opportunities, etc.). Note: you MUST have an ISV / OEM Contract in place with Salesforce.com for them to provide you with this org with its two free licenses.

Once you have this org (note: you do NOT have to request this free org; you can pay full price for all of your users, but why not start with it?), immediately log a case in the Partner Portal to “Request the LMA” — this case asks Salesforce to install a special “License Management Application” (LMA), a managed package, into your CRM org. Once this is installed, congratulations: your CRM org has now also become your License Management Organization (LMO). This org will now be the place where all information about customer interactions with your apps/packages is sent, and where you should manage it. WITHOUT this app, you will be unable to use Salesforce’s license-management tools to keep track of who is doing trials of your apps, activate licenses once customers have paid, add additional licenses to orgs, use the Partner Black Tab / Subscriber Support functionality, etc.

The benefits of having this org are huge: all activity related to client interaction with the AppExchange is tracked in this org. If a client does a demo of your product, a new Lead is generated with Lead Source appropriately set to indicate that a Demo was done. If they do “Get it Now”, another Lead is generated with Lead Source set to “Package Installation”, and a new License record is created in this organization, TIED to the Lead. Once this Lead is converted, the License records move right along into the Account and Contact, and you can build workflows on the Expiration Date / Install Date, etc.

So, for those of you stuck in Step 3, pulling your hair out, waiting for it to get finished, here are some ideas:

  1. Keep developing your app! Make it super-rocking.
  2. Along the way, get your app prepped for Security Review — put it through the Checkmarx Security Scanner (for native apps), and follow all of the other security guidelines available over at security.force.com.
  3. Get involved with the Salesforce Community, whether on the LinkedIn groups, Twitter, User Groups, Developer Boards, Salesforce StackExchange, etc. — build up your non-AppExchange distribution channels.
  4. Make some help files. A great way to do this is to leverage BlueMango’s ScreenSteps technology.
  5. Take your release notes training to stay current and take advantage of upcoming features.
  6. Test your app (see Step 7) in different Salesforce.com editions, pushing out beta packages to make sure you don’t include any features that disqualify your app for inclusion in Group / Professional Edition. This can be a real lifesaver later on if you were hoping for your app to be available to these Edition types! Basically, any references to features / API objects not available to GE/PE subscribers will disqualify your app from being installed into GE/PE orgs. You can often avoid these disqualifications through use of dynamic SOQL and DML (see the sketch just after this list), but always push out Managed Beta versions of your app and try to install them into test GE/PE orgs to make sure you didn’t miss something!
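Here is a minimal sketch of the dynamic-SOQL idea; the object name is purely illustrative, so adapt it to whatever feature-gated object your app touches:

// Avoid a compile-time reference to an object that GE/PE orgs may lack:
// check the describe information first, then query dynamically.
Schema.SObjectType quoteType = Schema.getGlobalDescribe().get('Quote');
if (quoteType != null) {
    List<SObject> quotes = Database.query('SELECT Id FROM Quote LIMIT 10');
    // ... work with the records generically via SObject get()/put()
}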

4. Link your orgs together via AppExchange

Once you have your two main orgs (1. your LMO / CRM org 2. your Managed Package Development Org), and have uploaded at least one Managed Released version of your managed package, login to the AppExchange using your Managed Package Development org credentials, click on your name in the top right corner, and click on the “Publishing Console” drop-down option.

You may be asked, immediately, to specify the credentials of the org where you want to manage licenses for your application. **Enter the credentials of your LMO/CRM org**. At this point, a “Package” record and “Package Version” record(s) will be created in your LMO corresponding to the Package and Package Versions developed in your managed package development org(s). If someone installs one of these Package Versions, new Lead and License records will be generated in your LMO corresponding to the installed Package Versions. NOTE: This can all happen WITHOUT having a Public AppExchange listing. You can use the LMO to activate licenses, extend trials (up to 90 days), enable Site licenses, or deactivate / suspend license records.

NOTE: Never delete License records from your LMO!!! You can’t recreate them – special magic happens on the backend that generates them, and this is non-reversible.

Now, we’re going to establish which org in our org network will play the role of AppExchange Publishing Organization / APO. To do this, go to the “Publishing Console” area. From the main screen, click on “Your Organizations”.

[Screenshot: the “Your Organizations” screen]

You may be asked, immediately, whether your organization has done any AppExchange publishing before. If you click “Yes”, you will be asked to enter the credentials of your “AppExchange Publishing Organization” / APO. **Enter the credentials for your LMO / CRM org.** By doing this, you will define your LMO / CRM org as your APO, meaning that it is now the central place for managing all of your company’s AppExchange apps. This is recommended. It is possible for your APO to be your Managed Package Development org, but NOT recommended. If you need to change your APO, you can typically do so simply by clicking “Change my AppExchange Publishing Organization”, but if you already have a Public AppExchange release, this will not be possible – so you’ll need to log a case in the Partner Portal, and they’ll change it for you.

Next, **link your Managed Package Development organization** by clicking “Link new Organization”. Enter the credentials of each Managed Package Development organization whose AppExchange listings and Licenses you’d like to manage through your LMO/CRM org’s umbrella.

[Screenshot: organizations linked to the APO]

Once this is done, you’ll be able to see and manage all of your company’s AppExchange listings which you have linked to your APO, just by logging in to AppExchange with your LMO/CRM/APO credentials! (Note: your CEO will like this! No logins to DE orgs needed, and single sign-on views are NICE. Very nice.) Just repeat this process every time you create a new AppExchange application, associating your managed package development orgs to your APO.

5. (Optional – mainly for ISV’s) Create an AppExchange listing

Login to AppExchange using your LMO/CRM/APO credentials, then go to the “Publishing” tab. Create a new Private listing corresponding to each of your Managed Apps. Part of your Private listing settings requires you to choose a managed-released version of your app to associate with your listing.

In order for the Listing to be changed from Private (meaning only you can see it) to Public (meaning that anyone can search AppExchange for it), your application will need to pass the Salesforce.com Security Review. To do this, click on the Package Version’s “Start Review” action. This will initiate a review process — you’ll get sent an email with a Security Review Questionnaire to fill out (not bad), which you’ll have to submit through the Partner Portal.

Once your app is Security Reviewed (usually takes 2-3 weeks for native apps if your Checkmarx report is error-free, but if you get hit for something, you’ll have to do it all over again, i.e. another 2 weeks), you’ll be able to make your app’s listing PUBLIC on AppExchange, and start earning revenue! Yay! Once customers start clicking “Get it Now”, new Leads and License records will be created in your LMO.

6. (For OEM’s using Trialforce) Create a Trialforce Management Organization

For OEM partners, who are using Force.com as a platform but are not using Salesforce.com CRM or Service Cloud objects, Trialforce is a massively powerful feature of the Force.com Partner Program. It basically lets you create your own trial environments, preconfigured exactly as you want with sample data, pre-installed packages (one or many!), metadata (e.g. reports, report types, custom settings data, public groups, you name it) — which will be used to create trial accounts for your prospective customers, saving them (and you) the hassle of having to set them up with an environment that really showcases your app in full working condition.

To start out with Trialforce, you need to create a Trialforce Management Organization (TMO): first, create a DE / PDE org, and then go to the Partner Portal and log a case asking for this org to be made your TMO. Once this is enabled, a special “Trialforce” area will appear in the Setup menu below “Develop” and “Deploy”. This area allows you to spawn new Trialforce Master Orgs — these are the actual environments where you define the “trial templates” from which trial orgs will be created for your customers. So to get this straight: you have ONE Trialforce Management Organization for managing all of your one or many Trialforce Master Orgs, which define the templates for your customers’ trial orgs. However, these Trialforce Management Orgs also serve another purpose, so let’s list out the two main uses of Trialforce Management Organizations:

  1. Managing all of your Trialforce Orgs: the TF Management Org allows you to create one or many Trialforce Master Orgs, which are “template” orgs in which you define full Salesforce environments pre-configured with sample data and one or multiple managed applications. Users get copies of these environments as starting points when they click “Try this application in a free trial environment” from AppExchange, or from a form on your company’s website (if you’d prefer not to go through AppExchange at all). You can push out multiple “Snapshots” of each Master org’s configuration, and you can then change which “Snapshot” is used to actually populate customer trials. There are two ways to define which “Snapshot” is used:
    1. For AppExchange-initiated trials: go to the AppExchange Publishing Console and select which Trial Template you’d like to use. Note: these templates have to be Security Reviewed. However, this process is generally very quick, a few days at most.
    2. For Company-website initiated trials: go to the Partner Portal, and log a case, specifying the new Trialforce snapshot to use.
  2. Managing your Custom Branding. From the Trialforce Management Organization, you can define Custom Email Branding to use with each of your Trialforce Master Orgs and define (through a slick builder) a Custom Login Site that users who started a trial using your Trialforce orgs will see INSTEAD of the default Salesforce login page.

7. Create Partner Test Organizations

From the Partner Portal’s “Create a Test Org” area, you can also create specially configured test orgs to use for validating your package internally. These orgs have special privileges:

  1. You can write Apex in them, without a Sandbox or having to meet test coverage requirements.
  2. You don’t have to pay for them (a plus!)
  3. They have lots of different user licenses pre-configured, like Customer Portal, Partner Portal Gold, Salesforce Platform, etc., that you can use to experiment with these features / functionalities.
  4. You can get them in any Salesforce edition type: e.g. you can create a test Group Edition org, Professional Edition org, or Enterprise Edition org. This is very useful for testing compatibility of your package with various editions’ feature restrictions.
  5. You can request certain features to be activated in them for internal validation: e.g. you do NOT want to enable “Advanced Currency Management” in your managed package development org if you want your package to be installable into orgs that don’t need this, but you might want your package to play well with this feature. So, to test this, get a test org, log a case to have this feature enabled, then install your package into it and see how it works.
  6. You can install Managed – Beta packages in them.
  7. They’re great for doing demos for Customers.
  8. They come with 6 Sandboxes, including one Full sandbox, so it’s easy to quickly do Quality Assurance of new package version functionality in a sandbox of your main Customer Demo org.
  9. They have a decent amount of data space — more than your DE orgs will — so you can do load testing.

8. Support your customers

As either an ISV or an OEM, once you’ve got customers who have paid for your app, either through AppExchange Checkout / Recurly or through checks in the mail / credit card / direct deposit, and you’ve activated their licenses (through your LMO), you’ll want to be able to help your customers resolve any problems they may be having. For this, the Salesforce Partner Program has a great tool called the “Subscriber Support Console”, which comes along with the LMA and as such is automatically in your LMO. From the “Subscriber Support” tab included in this app, you can see ALL customers who have installed your application – getting a summary of their org and what packages from your company they have installed, whether those packages are active / expired, when they expire, what version they’re on, and more!

PLUS, your customers will actually be able to Grant Login Access to your company, in addition to Salesforce.com Support and their admins, so that you can, through the Subscriber Support tab, actually log in to a customer’s org exactly AS the customer sees it, and make changes – this is known as “Partner Black Tab” access. You will actually get special privileges – for instance the ability to see Protected Custom Settings and set special Sharing Rules – that your customers cannot see on their own.

9. Upgrade / iterate your app

Your app is probably not going to stay the same, so how do you get your existing customers onto newer versions? And what if only certain customers should be upgraded, or if you want to try out upgrades / new features on certain beta-tester customers before all folks should get it?

Fortunately, Salesforce has tools for this as well. From your managed package development org, go to Create > Packages > “Your Package Name”, and click on “Versions”. Then, click on the “Push Upgrades” button. If you haven’t yet, you’ll need to go to Partner Portal to request access to this feature set. Once it’s enabled, though, you can do the following:

  • Create Patch Development Orgs: “patch orgs” are like “branches” of a given Managed Released version (e.g. 1.24) of your package that customers may have installed. Each patch org, like its parent managed package development org, can actually create/upload package versions; however, only certain components can be modified in patch orgs — no new fields, objects, or components can be added, but existing components, such as Apex code, can be modified to some extent. You can then PUSH these Patch versions to selected / all customers who are on the parent branch (e.g. you may develop a patch 1.24.1 — and then all customers on 1.24 are eligible to upgrade to 1.24.1).
  • Push Major Upgrades: as of Winter 13, you can actually push Major Releases of your package to customers as well – so customers on 1.6 can be upgraded to 1.24 directly, automatically, in the middle of the night, with no action on your end other than scheduling it to take place. However, this possibility is NOT to be taken lightly: tread carefully, following these guidelines.
  • Manually upgrade customers using install links: you can always take the install link from either a Patch version or a Major release version and manually install a package into a customer’s org by logging in to that org (if you have access), or having the customer log in to that org and paste the link into the URL bar.

SUMMARY

Anyway, hope this was helpful. To review, do you know what these acronyms stand for now?

  • DE – a Developer Edition org, where you can develop / experiment with Salesforce functionality
  • PDE – Partner Developer Edition – a special “super-sized” development edition type appropriate for AppExchange application development
  • LMA – the License Management Application. Each Partner should have ONE org, typically their CRM for Partners org, that has this installed in it. Whatever org has this app installed is dubbed the “License Management Org” / LMO.
  • LMO – License Management Organization / Partner CRM org – where you track customers and package licenses, and support customers
  • APO – AppExchange Publishing Organization – AppExchange terminology for the org that you would like to be the AppExchange listing management hub for all of your company’s AppExchange apps/listings. As a best practice, make this your LMO/CRM org. This will allow you to manage all of your AppExchange listings from one place.
  • TMO – Trialforce Management Organization – where you manage your company’s custom branding and create/manage your Trialforce Master Orgs
  • ISV – a Salesforce partner who is part of the ISVforce program. Typically these partners build apps for AppExchange distribution, to fit in to any Salesforce org.
  • OEM – an “Original Equipment Manufacturer”, but in the Salesforce world, a Partner who uses the Force.com Embedded Edition distribution model, selling Force.com Licenses (technically “Salesforce Platform” / AUL licenses) “embedded” in their application’s cost.

In Winter 12, Salesforce very quietly, and without any fanfare, introduced a capability to Apex that has been something of a holy grail to insane Force.com geeks like myself: the beginnings of support for Reflection in Apex. Which intrepid explorer blazed the trail that led to this prodigious capability? For those of you who are keeping score, our silent hero’s name is not Indiana, but Jason. Or, more precisely, JSON.

This is a bit of a historical post, meant to pave the way for a more radical post coming later this week (stay tuned!). It describes events / features that have occurred / been introduced over the course of the past year, but whose full significance is only now being discovered by many Force.com developers. Understanding these developments will be a crucial preliminary to reading my upcoming post on an extension of Tony Scott’s trigger pattern to allow for scalable, extensible triggers usable by managed package developers / ISV’s (there, I let the cat out of the bag!)

Ancient History: Native JSON Support lands in Apex

In Winter 12, Salesforce added native support for JSON to Apex, meaning that you can, in just 2 lines of code, convert (nearly) any Apex class/object into a JSON-formatted String, and then convert it back from a String into the proper Apex class/object. For instance:


Map<String,String> params = new Map<String,String>{
    'foo' => 'bar',
    'hello' => 'world'
};
// Serialize into a JSON string
String s = JSON.serialize(params);
// --> yields '{"foo":"bar","hello":"world"}'

// Deserialize back into an Apex object

// Define the Type of Apex object that we want to deserialize into
System.Type t = Map<String,String>.class;

params = (Map<String,String>) JSON.deserialize(s,t);

The addition of support for JSON saved ISV’s the immense hassle (and extremely-limiting obstacle) of implementing their own JSON serialization classes, which could very quickly consume all of the 200,000 Script Statements Governor Limit. So needless to say, this alone made native JSON support a godsend to Apex developers.

The Tip of the Iceberg: System.Type and Apex Interfaces

But notice that little System.Type token that we obtained prior to deserialization. As the Apex development team was building in JSON support, they laid the foundation for something even bigger – an Apex way to achieve what’s known as “Reflection”, or dynamic instantiation/execution of classes/methods by name. For developers looking for a parallel in Force.com, this is similar to creating a new SObject by getting a Schema.SObjectType token from the Global Describe Map using an arbitrary String (e.g. ‘Account’) as the map search key, and then calling the newSObject() method. In this scenario, you could have received the dynamic key (‘Account’) from anywhere — from a text field in a custom object in your database, from a JavaScript prompt in the client, etc. — and run totally dynamic logic (the instantiation of an arbitrary SObject type) based on that String value.
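For instance, here is that Schema-based pattern in code form (the field value is purely illustrative):

// Turn an arbitrary String into a brand-new SObject instance
String objectName = 'Account'; // could come from a custom field, a client-side prompt, etc.
Schema.SObjectType token = Schema.getGlobalDescribe().get(objectName);
SObject record = token.newSObject();
record.put('Name', 'Dynamically instantiated!');
insert record;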

How does this Schema example connect to Trigger Patterns? Well, the reason that the dynamic SObject instantiation from Schema data works is that each Custom and Standard Object in your database is an instance of an interface called “SObject”. An interface is a definition of methods that all implementations of that interface (such as Account, Contact, MyCustomObject__c) must support. This is why we can cast any Account, Contact, or custom object record into an SObject and then get access to some of the dynamic methods that the SObject class supports, such as get(String fieldName) or getSObject(String fieldName) — because each Account, Contact, or CustomObject__c record is not just an instance of an Account, Contact, or CustomObject__c — it’s also an implementation of the SObject interface, which defines the methods get(), getSObject(), etc.

With native JSON in Apex, you can now take this Reflection business a huge step further, thanks again to the magic of interfaces. Before we tackle custom interfaces, let’s consider an important standard Apex interface: Schedulable. The Schedulable interface looks like this:


global interface Schedulable {
    // (Apex interface methods don't carry access modifiers)
    void execute(SchedulableContext ctx);
}

The power of Apex classes that implement Schedulable is that they can be scheduled for execution at a later date/time using the System.schedule() method, like so:


// Get an instance of MyScheduledClass,
// which implements Schedulable
Schedulable c = (Schedulable) new MyScheduledClass();

// Schedule MyScheduledClass to be run later
System.schedule('CleanUpBadLeads','0 0 8 10 2 ?',c);

Now, for those of you who were excited about Skoodat Relax, you may have just realized how it all works, and you are absolutely right! One of the features of Relax is its ability to store the name of a Scheduled Apex Class, and the CRON schedule that dictates when it should be run, in a custom Object called Job__c in Salesforce, and then dynamically activate, deactivate, and run the desired classes at the desired times. And the not-so-secret secret (Relax is on GitHub) behind it is: it all works because of interfaces… and because of native JSON support.

Taking this a step further, as Relax does, we can define complex custom interfaces, with whatever methods we’d like, and then, when our Apex code is run, we can call these methods on whatever implementation of our interface we are handed:


// Our interface definition
// (note: Apex interface methods don't carry access modifiers)
public interface Runnable {
   void run();
}
// A sample implementation
public class DestroyBadApples implements Runnable {
   public void run() {
       // Delete Apple__c records created more than a year ago
       delete [select Id from Apple__c
               where CreatedDate <= :Datetime.now().addYears(-1)];
   }
}

// A sample use of this interface
public class RunStuff {
   public RunStuff() {
      // Define the stuff we'd like to run
      List<Runnable> stuffToRun = new List<Runnable>();
      stuffToRun.add(new DestroyBadApples());

      // Run the stuff
      for (Runnable r : stuffToRun) r.run();
   }
}

Part 3, in which we catch the scent of our prey: Dynamic Class Instantiation with JSON

With Apex support for interfaces, developers already had, prior to Winter 12, one of the two key tools they needed that would make possible the Apex Holy Grail of dynamic method calls / code execution. The missing tool was the ability to create an implementation of an interface without knowing the class’ name beforehand. In our previous examples, notice how we had to hardcode the name of the implementations, e.g. MyScheduledClass and DestroyBadApples.

With Winter 12, our humble hero JSON gave us that missing tool. How so? Well, the JSON.deserialize(string, type) method allows us to deserialize any String into any supported Apex Type, as defined by Apex Classes which have a corresponding representation as a System.Type — a little-noted System class that crept in with Winter 12. The key tricks that turned System.Type into a Holy Grail as early as Winter 12 were:

  1. A little trick involving an empty JSON String, e.g. “{}”
  2. System.Type.forName(name)

Combine these two tricks with interfaces, and voila! We have Dynamic Class Instantiation in Apex:


// Dynamically schedule jobs whose class names are stored in a custom object
public class JobRunner {
   public static void ScheduleJobs() {
      // Retrieve the jobs we'd like to schedule from a custom object
      for (Job__c j : [select JobNumber__c, Cron__c, ApexClass__c from Job__c]) {
          try {
              // Get the Type 'token' for the Class
              System.Type t = System.Type.forName(j.ApexClass__c);
              if (t != null) {
                 // Cast this class into a Schedulable interface
                 // (if the cast fails, we'll get an error)
                 Schedulable s = (Schedulable) JSON.deserialize('{}',t);
                 // Schedule our Schedulable using the stored CRON schedule
                 System.schedule('JobToRun'+j.JobNumber__c,j.Cron__c,s);
              }
          } catch (Exception ex) {
              // Skip jobs whose classes can't be found, deserialized, or scheduled
          }
      }
   }
}

Part the 4th, in which our hero’s achievements are acknowledged but his powers not increased

In Summer 12, Salesforce “officially” acknowledged its intentions with System.Type and added a newInstance() instance method, allowing for less hackish dynamic class instantiation by name. However, the JSON method remains far more powerful, as it allows us to dynamically populate all manner of fields/properties on our dynamically-instantiated objects — using industry-standard JSON syntax, which allows this dynamic population to be initiated client-side and completed server-side in Apex! If your mind is not spinning with all of the possibilities right now, I’m sure it either did so already during the past 12 months, or will start doing so very soon — particularly when we talk about applying this capability to Triggers (more propaganda for the next post!).
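For comparison, here is a minimal sketch of the Summer 12 approach, reusing the Runnable / DestroyBadApples example from above:

// Summer 12: dynamic instantiation without the JSON trick
System.Type t = System.Type.forName('DestroyBadApples');
if (t != null) {
    Runnable r = (Runnable) t.newInstance();
    r.run();
}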

So where can Salesforce go from here? Well, there is still no direct way to dynamically execute a particular method on an Apex class without hard-coding the method’s name. Support for this would take Reflection in Apex to a whole new level. For now, though, well-crafted Interfaces can usually get around this limitation. The real meat of the work was done early last year when Salesforce R&D was “snowed over” in development for Winter 12, and for the fruit of those 4 months, the Force.com Development community should be immensely thankful.

Of all the killer Summer 12 improvements to the Force.com Platform, the one we at Skoodat were most excited about was Post-Install and Uninstall Apex scripts. In a nutshell, they allow developers of Force.com Managed Packages (more posts coming on what these are and how to use them!) to execute arbitrary Apex Code every time that a User installs or uninstalls a version of the developer’s package. This feature is directly targeted at developers’ needs to perform various “setup” and “tear-down” tasks to help prepare customers’ orgs to use their new packages.

What sorts of tasks fall under this category? Here are a few of the most common:

Setup / On-Install Tasks

  • Create/update records of “configuration” objects
  • Create/update records of List Custom Settings or Organization-Default Hierarchy Custom Settings
  • Create sample data (for Trials)
  • Send “Welcome” Emails to the User who installed the package
  • Notify an external system (using a callout or email service)
  • Kick off a batch process (for initializing fields, modifying fields to fit new APIs, etc.)

Teardown / On-Uninstall Tasks

  • Delete sample data
  • Send “Thanks for using our Package” Emails to the User who uninstalls the package
  • Notify an external system (using a callout or email service)

Prior to this feature, developers would use various “hack” strategies for achieving this sort of behavior, including:

  • Setup Page
    • Strategy: The default tab in your app is a “setup” page that runs needed routines either on page load or asynchronously in response to user button clicks
    • Problems:
      • Users may not go to this page
      • DML is not permitted in page constructors, forcing asynchronous post-load behavior
      • Button click behavior often requires lots of custom JavaScript Remoting calls or VF rerenders
      • Wastes users’ time getting to the real meat of your app
  • Runtime Detection
    • Strategy: When custom triggers / VF pages in your app are run, your app looks to see if needed setup tasks have been performed, and runs setup routines as appropriate
    • Problems:
      • Makes your app run slower — this logic has to be run every time your trigger / page is run
      • DML is not permitted in VF page constructors, so any insertions of Custom Setting or custom configuration object records will have to occur post-load / asynchronously, which can really throw off page functionality

With the problems of the hack approaches in mind, Apex Install Scripts appeared to be a total godsend! And, largely, they are indeed.

So how does one use this tool? Well, the first step is to have a Managed Package. Once you have a Managed Package, you need to create a new class that implements the InstallHandler interface. The docs have a basic example of how to do this. Then, edit your package:

You will then see two fields, one for specifying a Post Install Script, and another for specifying a script to run when your package is Uninstalled by a User. In the lookup, you will be prompted to select a class which implements InstallHandler. Be sure your class has COMPILED before trying this — otherwise it may not show up in the list of available classes.
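If you haven’t written one yet, a skeletal InstallHandler implementation looks something like this (the class name is arbitrary):

global class MyPostInstallScript implements InstallHandler {
    // Runs every time a version of your package is installed
    global void onInstall(InstallContext ctx) {
        // Setup tasks go here
    }
}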

Now, release a Managed – Beta version of your package (so that your InstallHandler implementation class does not get locked down into your package until you are ready). Once you have the install link, install your package into a Sandbox or Partner DE or Test Org (the only places where Managed – Beta packages can be installed). As soon as you are finished with the 2-3 step install instructions, your Post Install Script will be run. To run your Uninstall Script, uninstall this package.

Pretty cool stuff!

The Phantom of the Install Opera

As we at Skoodat dove into using Install Scripts, we encountered some mysterious, strange, sometimes phantom behaviors, which raised some pretty basic questions in our minds as to the nature and functionality of Install Scripts — none of which are discussed in the InstallHandler interface documentation:

  • Who do Install Scripts execute as?
    • What User?
    • What Profile?
    • Based on Source (DE) Org, or Destination Org?
  • What code/resources in the Destination Org do Install Scripts have access to?
    • Global methods in Apex Classes in Managed Apps that the Install Script’s app extends?
  • Which Namespaces / Managed Apps do Install Scripts have access to?
    • All apps? Just the installer’s app? Or any Site Licensed apps?

In the absence of documentation, the only way to find out was to either ask questions of the SFDC community, or find out through experimentation. I did both, and here are some things I discovered:

  • Install Scripts execute as a totally unique User and Profile
    • Neither the User nor Profile exist in either Source or Destination orgs.
    • This User / Profile has essentially ‘God’ / System privileges
    • This User / Profile can create / modify records of Standard objects as well as Custom Objects / Custom Settings that come with the package being installed.
  • Do NOT annotate your Install Script, or any classes it calls (such as Batch/Scheduled Apex) as “with sharing”
    • YES: public class MyInstallScript implements InstallHandler
    • YES: public without sharing class MyInstallScript implements InstallHandler
    • NO: public with sharing class MyInstallScript implements InstallHandler
  • DML operations can fail if they are not initiated from the class which implements the InstallHandler interface
    • A DML operation executed from a helper class will fail due to SObjectExceptions such as “Field Name is not accessible”
    • *Simply moving this code back into the class which implements InstallHandler can avoid this issue*
  • Describe / Schema / Permissions info is WRONG / MISLEADING from the InstallScript context
    • Schema permissions (e.g. Object or Field Accessibility) always returns FALSE
      • Example: Calls such as Contact.Name.getDescribe().isAccessible() will universally return FALSE regardless of Profile permissions in either Source or Destination org
    • Profile / PermissionSet data is WRONG
      • Example: PermissionsModifyAllData returns FALSE on the Install Script User’s Profile / PermissionSet, but the InstallScript CAN modify all data!
    • Consequences: Do NOT try to dynamically determine whether to permit DML based on permissions from within Install Scripts. You will get very frustrated.
    • *Timeline for Resolution*: Summer 13 release (Safe Harbor — got this from SFDC support)
      • UPDATE 2/25/2013 — looks like this was NOT resolved in Spring 13, so I’ve changed to Summer 13 :(

If you’re wondering how I verified all this, here’s the Post Install Script I used. Basically, it spits out some Describe information on a couple of standard objects (Account and Contact) as well as a custom object in the source developer org (relax__Job__c). I also spit out the username and profileId of the running user, by which I discovered that the running user is in fact a “phantom” with all the powers of System. Here’s the complete InstallScript. After assembling a debug string, it calls a helper method that emails me the content.


public class InstallScript implements InstallHandler {

	public void onInstall(InstallContext ctx) {

		String username = UserInfo.getUserName();
		String profileId = UserInfo.getProfileId();
		String debugString =
			'Username: ' + ((username != null) ? username : 'null')
			+ ', ProfileId: ' + ((profileId != null) ? profileId : 'null')
			+ ', Contact.Accessible: ' + Contact.SObjectType.getDescribe().isAccessible()
			+ ', Contact.LastName.Accessible: ' + Contact.LastName.getDescribe().isAccessible()
			+ ', Account.Accessible: ' + Account.SObjectType.getDescribe().isAccessible()
			+ ', Account.Name.Accessible: ' + Account.Name.getDescribe().isAccessible()
			+ ', relax__Job__c.Accessible: ' + relax__Job__c.SObjectType.getDescribe().isAccessible()
			+ ', relax__Job__c.relax__Apex_Class__c.Accessible: ' + relax__Job__c.relax__Apex_Class__c.getDescribe().isAccessible();

		JobScheduler.SendDebugEmail(
			debugString,debugString,'Debug from Relax Install Script in org ' + ctx.organizationId(),'myname@mycompany.com'
		);
	}

	/////////////////
	// UNIT TESTS
	/////////////////

	private static testMethod void TestInstall() {
		InstallScript is = new InstallScript();
    	Test.testInstall(is, null);
    	Boolean b = false;
    	System.assertEquals(false,b);
	}
}

And… here is the debug output:

Username: 033e0000000h2ydias@00dd0000000bt2keaq
ProfileId: 00ed0000000SUxMAAW
Contact.Accessible: false
Contact.LastName.Accessible: false
Account.Accessible: false
Account.Name.Accessible: false
relax__Job__c.Accessible: false
relax__Job__c.relax__Apex_Class__c.Accessible: false

As you can see, the Schema methods universally return false, and a quick trip to SOQLExplorer will reveal that neither the User nor the Profile executing this Script exists in either the Source or Destination org.

Making the Switch

So, bugs and weirdness aside, Install Scripts can still do some rocking things. At Skoodat, we’ve used them to populate certain configuration objects from JSON/XML stored in Static Resources included with our package, as well as to insert default custom settings. Here’s an InstallScript that would tackle the second of these:


public class InstallScript implements InstallHandler {

   public void onInstall(InstallContext ctx) {

      // Create an Org-Default record
      // of a Hierarchy Custom Setting
      // included in our package
      insert new MySetting__c(
         // Makes an Org Default setting
         SetupOwnerId = UserInfo.getOrganizationId(),
         SettingField__c = 'JediWarrior'
      );

   }

   /////////////////
   // UNIT TESTS
   /////////////////

   private static testMethod void TestInstall() {

      // Delete any Org-Default Custom Settings
      delete [select Id from MySetting__c
         where SetupOwnerId = :UserInfo.getOrganizationId()];

      // Run our post-install script,
      // simulating a no-prior version installed scenario
      InstallScript is = new InstallScript();
      Test.testInstall(is, null);

      // Verify that an Org-Default setting was created
      MySetting__c c = MySetting__c.getOrgDefaults();

      System.assertNotEquals(null,c);
      System.assertEquals('JediWarrior',c.SettingField__c);

   }
}

So much easier than making a custom setup page, and it happens instantly!

We’re definitely looking forward to more improvements to this feature in Winter 13 and beyond, especially when we can have public implementations of global interfaces. Moreover, Winter 13 will include some sweet new methods for working with Static Resource data in Tests, so I may just be convinced to post an example of creating custom config object records in an InstallHandler from data stored in Static Resources as JSON/XML — so stay tuned!

UPDATE 6/25/2014

I recently stumbled upon a post by Matt Bingham on Salesforce StackExchange where he documents that there is a huge difference between leaving your Install Script’s sharing mode unspecified and explicitly marking it as without sharing. That is,

public class MyInstallScript implements InstallHandler {

}

is NOT the same as

public without sharing class MyInstallScript implements InstallHandler {

}

In fact, marking your class as without sharing grants it permission to

  • view all data
  • modify all data
  • interrogate system data (like CronTrigger and ApexClass)

I have also seen that there are some objects, such as the Chatter Group object (CollaborationGroup), that you can only interact with fully from Install Scripts in this “super-user” mode.

Regarding inaccurate PermissionSet / Profile data, I have also run into another frustrating wrinkle: the Install Script User does have a Profile, but this Profile is not queryable, so any logic in your app related to querying the running user’s Profile or PermissionSetAssignments will NOT work, because (a) the User’s Profile record does not exist (you get the error “List has no rows for assignment to SObject”), and (b) the PermissionSetAssignments relationship on User is not visible from the InstallScript context (you get the error “Didn’t understand relationship PermissionSetAssignments”). Therefore, you’ll have to write special logic to handle the scenario where the running User’s Profile record does not exist, to avoid query exceptions.
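As a workaround, here is a hedged sketch of the kind of defensive Profile querying I mean (the branch logic is just illustrative):

// Query defensively: the install-script 'phantom' User's Profile doesn't exist
List<Profile> profs = [select Id, Name from Profile
    where Id = :UserInfo.getProfileId() limit 1];
if (!profs.isEmpty()) {
    // Normal, Profile-dependent logic goes here
} else {
    // We're probably running as the install-script phantom User:
    // skip or default any Profile-based logic
}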


We at Skoodat are considering exposing the platform we use for rapidly building and deploying killer custom UIs on Force.com — aka Skoodat Skuid. But we’re looking for your feedback: are we crazy, or does this totally excite you? Let us know!

Skoodat Skuid — Wicked cool UI platform for Force.com, or not?

*Quick check to make sure this post is worth 5 minutes of your precious work day:*

If you have ever wanted:

(1) To run long chains of Batch Apex jobs in sequence without using up scheduled Apex jobs
(2) To run batch/scheduled Apex as often as every 5 minutes every day
(3) To manage, mass-edit, mass abort, and mass-activate all your scheduled and aggregable/chained Apex jobs in one place, in DATA
(4) To avoid the pain of unscheduling and rescheduling Apex jobs when deploying

Then keep reading :)

A recent (well, maybe not so recent — is it June already???) blog post by Matt Lacey really struck a chord with me. Matt highlights one of the biggest developer woes related to using Scheduled Apex — whenever you want to deploy code that is in some way referenced by an Apex Class (or classes) that is/are scheduled, you have to unschedule all of these classes. With Spring 12, Salesforce upped the limit on the number of Scheduled Apex Classes from 10 to 25 (ha–lellujah! ha–lellujah!). However, with 15 more scheduled jobs to work with, this have-to-unschedule-before-deploying problem becomes even more of a pain.

But this isn’t the only woe developers have related to asynchronous Apex. An issue that has shown up a lot lately on the Force.com Developer Boards and LinkedIn Groups is that of running Batch Apex jobs in sequence — run one batch job, then run another immediately afterwards. Cory Cowgill published an excellent solution for accomplishing this, and it has been the go-to method for linking Batch Apex jobs ever since. But one of the problems with his method is that it leaves a trail of scheduled jobs lying in its wake — seriously cluttering your CronTrigger table. If you want to run these batch process sequences often — e.g. kicking them off from a trigger, or running them every day (or even every hour!) — you have to start “managing” your scheduled jobs more effectively.

A third issue often cited by developers is the frequency at which jobs can be run. Through the UI, a single CronTrigger can only be scheduled to run once a day. Through code, you can get down to once an hour. If you wanted to, say, run a process once every 15 minutes, you’d have to schedule the same class 4 times — using up 4/25 (16%) of your allotted scheduled jobs — and you have to manage this through Apex, not the UI.
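To make the pain concrete, here is a sketch of that workaround (DedupeJob is a hypothetical Schedulable class):

// Burn 4 of your 25 scheduled jobs to fake a 15-minute frequency
System.schedule('Dedupe :00', '0 0 * * * ?', new DedupeJob());
System.schedule('Dedupe :15', '0 15 * * * ?', new DedupeJob());
System.schedule('Dedupe :30', '0 30 * * * ?', new DedupeJob());
System.schedule('Dedupe :45', '0 45 * * * ?', new DedupeJob());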

As I mulled over these issues, I thought, there’s got to be a better way.

There is.

Enter Relax

Relax is a lightweight app I’m about to release, but before I do, I’m throwing it out for y’all to try as a public beta. (The install link is at the end of the post, but restrain yourself, we’ll get there :) )

Broadly speaking, Relax seeks to address all of these challenges, and a whole lot more.

At a high level, here are a few of the things it lets you do:

  1. Manage all your individually-scheduled Apex jobs in data records (through a Job__c object). Jobs can be mass scheduled and mass aborted, or mass exported between orgs, and then relaunched with clicks, not code.
  2. Create and schedule ordered Batch Apex “chains” of virtually unlimited length, with each “link” in the chain kicking off the next link as soon as it finishes. And all of your chained processes are handled by, on average, ONE Scheduled Job.
  3. Schedule jobs to be run as often as EVERY 1 MINUTE. ONE. UNO.
  4. Run ad-hoc “one-off” scheduled jobs without cutting into your 25 scheduled-job slots.

Let’s jump in.
 
Individual vs. Aggregable
In Relax, there are 2 types of jobs: individual and aggregable. You create a separate Job__c record corresponding to each of your Individual jobs, and all of the detail about each job is stored in its record. You can then separately or mass activate / deactivate your jobs simply by flipping the Active? checkbox. Each Individual job is separately scheduled — meaning there’s a one-to-one mapping between a CronTrigger record and an Individual Job__c record. Here’s what it looks like to create an Individual Job. You simply specify the CRON schedule defining when the job should run, and choose a class to run from a drop-down of Schedulable Apex Classes.

Aggregable jobs, on the other hand, are all run as needed by the Relax Job Scheduler, at arbitrary increments. For instance, you could have an aggregable job that runs once every 5 minutes, checking for new Cases created by users of your public Force.com Site (whose Site Guest User Profile does not have access to Chatter) and inserting Chatter Posts on the appropriate records. You could have a job that swaps the SLA of Gold/Bronze SLA Accounts once every minute (contrived, yes, but OH so useful :) ). Or you could have a series of 5 complex Batch Apex de-duplication routines that need to run one after the other: set them all up as separate aggregable jobs, assign orders to them, and have the entire series run once every 15 minutes, every day. Here’s what the SLA swapper example looks like:

How do I use it?
What do you have to do for your code to fit into the Relax framework? It’s extremely simple. For Individual Jobs, your Apex Class just needs to be Schedulable. For Aggregable Jobs, there are several options, depending on what kind of code you’d like to run. For most devs, this will be Batch Apex, so the most useful option at your disposal is to take any existing Batch Apex class you have and have it extend the “BatchableProcessStep” class that comes with Relax:

// relax's BatchableProcessStep implements Database.Batchable<sObject>,
// so all you have to do is override the start, execute, and finish methods
global class SwapSLAs extends relax.BatchableProcessStep {

    // Swaps the SLAs of our Gold and Bronze accounts

    // Note the override
    global override Database.QueryLocator start(Database.BatchableContext btx) {
        return Database.getQueryLocator([
            select SLA__c from Account where SLA__c in ('Gold','Bronze')
        ]);
    }

    global override void execute(Database.BatchableContext btx, List<SObject> scope) {
        List<Account> accs = (List<Account>) scope;
        for (Account a : accs) {
            if (a.SLA__c == 'Gold') a.SLA__c = 'Bronze';
            else if (a.SLA__c == 'Bronze') a.SLA__c = 'Gold';
        }
        update accs;
    }

    // The ProcessStep interface includes a complete() method
    // which you should call at the end of your finish() method
    // to allow relax to continue chains of Aggregable Jobs
    global override void finish(Database.BatchableContext btx) {
        // Complete this ProcessStep
        complete();
    }

}

That’s it! As long as you call the complete() method at the end of your finish() method, relax will be able to keep infinitely long chains of Batch Jobs going. Plus, this framework is merely an extension to Database.Batchable — meaning you can still call Database.executeBatch() on your aggregable Batch Apex and use it outside of the context of Relax.
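For instance, you could still kick off the SwapSLAs class above on its own, outside of any Relax Process:

// Run SwapSLAs directly, outside of the Relax framework
// (the 200 is just an example batch size)
Database.executeBatch(new SwapSLAs(), 200);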

Relax in action

In our org, we have 4 jobs set up: 1 individual, 3 aggregable. To kick them off, we select all of them, and change their Active? field to true. Here’s what it looks like after we’ve mass-activated them:

And here’s what the Scheduled Jobs table (accessible through the Setup menu) looks like. Since Case Escalator was set to be run individually, it has its own job. Then there’s a single “Relax Job Scheduler”, which runs every 5 minutes (I’ll probably drop this down to every 1 minute once I officially release the app), checking to see if there are any aggregable Jobs that need to be run, and running them:

Every time the Relax Job Scheduler runs, the very first thing it does is schedule itself to run again — regardless of what happens during any processing that it initiates. It then queries the “Next Run” field on each Active, Aggregable Job__c record that’s not already being run, and if Next Run is less than now, it queues the Job up to be run as part of a Relax “Process”. Each Process can have an arbitrary number of ProcessSteps, which will be executed sequentially until none remain. If both the current and next ProcessSteps are BatchableProcessSteps, Relax uses a “Process Balloon” to keep the Process “afloat” — essentially launching a temporary scheduled job that is immediately thrown away as soon as the next ProcessStep begins.

One-off Jobs

Another powerful feature of Relax is the ability to effortlessly launch one-off, one-time scheduled jobs, without having to worry about cluttering the CronTrigger table with another scheduled job. It takes just 1 line of code, AND you can specify the name of the class to run dynamically — e.g. as a String! Reflection ROCKS!!!

// Schedule your job to be run ASAP,
// but maintain the Job__c record so that we can review it later
relax.JobScheduler.CreateOneTimeJob('myNamespace.AccountCleansing');

// Schedule your job to be run 3 minutes from now,
// and delete the Job__c record after it's run
relax.JobScheduler.CreateOneTimeJob(
    'SwapSLAs',
    Datetime.now().addMinutes(3),
    true
);

Try it out for yourself!

Does this sound awesome to you? Give it a test run! Install the app right now into any org you’d like! Version 1.1 (managed-released)

Please send me any feedback you may have! What would make this more usable for you? I’ll probably be releasing the app within the next month or so, so stay tuned!

When I hear the words “Reports” and “Managed Packages” in the same sentence, I involuntarily let out a grunt of displeasure. Ask any seasoned ISV, and I guarantee you that the same sour taste fills their mouths. Why? Well, here’s the classic problem: an ISV includes some Reports in their managed package. Now, a common trick for making Reports “dynamic” is to leave one of the Filter Criteria blank and then have its value passed in through query string parameters using the “pv<n>” syntax, where n is the 0-indexed position of the Filter Criterion you’d like to populate. For example, in this report of Enrollments at a given School, parameter 2 is left BLANK:

Then, if we load up this page with query string parameter “pv2” set to the name of a School, like so:
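(Here is a hypothetical example URL — the Report Id is made up:)

https://na1.salesforce.com/00O40000001AbCd?pv2=Lincoln+High+School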

the value that we passed in will be dynamically inserted into the 2nd Filter Criterion, and we’ll have a report on Enrollments at Lincoln High School:

This is awesome, right? Quick, let’s throw a Custom Link on the Account Detail Page called “Enrollments” that links to this report, passing in the Id of the Report! Yeah, yeah, yeah! I love this!

Hold your horses, partner.

This is where the ISVs hang their heads in sadness… sorry son, it just ain’t that easy.

What’s the matter, grandpa? Come on, this is child’s play!

Not quite.

Notice where we said that we’d be passing in the ID of the Report. Hard-coded. For an ISV, IDs are the ultimate taboo. Why? Well, sure, you can package up the Report and Custom Link. But as soon as you install the package into a customer’s org, the Id of the Report has CHANGED — and the link is BROKEN. It’s a situation that the typical one-org Admin will never have to face, but, well, welcome to the world of the ISV.

Isn’t there one of those handy Global Variables which lets you grab the Name or DeveloperName of a Report?

Nope, sorry partner.

So, what DO you do?

Well, you write a ‘ViewReport’ Visualforce page that takes in the API name of a Report — which does NOT change across all the orgs that a package is installed into — and uses this API name to find the ID of the Report and send you to it. What does this look like?

The Visualforce is simple — one line, to be exact:


<apex:page controller="ViewReportController" action="{!redirect}"/>

The Apex Controller is a little more interesting. Here’s the meat, with test code included that achieves 100% coverage (so you can start using it right away!!!):


public class ViewReportController {

    // Controller for ViewReport.page,
    // which redirects the user to a Salesforce Report
    // whose name is passed in as a Query String parameter

    // We expect to be handed 1-2 parameters:
    // dn: the DEVELOPER name of the Report you want to view
    // ns: a Salesforce namespace prefix (optional)
    public PageReference redirect() {
        // Get all page parameters
        Map<String,String> params = ApexPages.currentPage().getParameters();

        String ns = params.get('ns'); // NamespacePrefix
        String dn = params.get('dn'); // DeveloperName

        List<Report> reports;

        // If a Namespace is provided,
        // then find the report with the specified DeveloperName
        // in the provided Namespace
        // (otherwise, we might find a report in the wrong namespace)
        if (ns != null) {
            reports = [select Id from Report
                  where NamespacePrefix = :ns
                  and DeveloperName = :dn limit 1];
        } else {
            reports = [select Id from Report where DeveloperName = :dn limit 1];
        }

        PageReference pgRef;

        // If we found a Report, go view it
        if (reports != null && !reports.isEmpty()) {
            pgRef = new PageReference('/' + reports[0].Id);
            // Add back in all of the parameters we were passed in,
            // MINUS the ones we already used: ns, dn
            params.remove('ns');
            params.remove('dn');
            pgRef.getParameters().putAll(params);
        } else {
            // We couldn't find the Report,
            // so send the User to the Reports tab
            pgRef = new PageReference('/'
                + Report.SObjectType.getDescribe().getKeyPrefix()
                + '/o'
            );
        }

        // Navigate to the page we've decided on
        pgRef.setRedirect(true);
        return pgRef;

    }

    ////////////////////
    // UNIT TESTS
    ////////////////////

    // We MUST be able to see real Reports for this to work,
    // because we can't insert test Reports.
    // Therefore, in Spring 12, we must use the SeeAllData annotation
    @isTest(SeeAllData=true)
    private static void Test_Controller() {
        // For this example, we assume that there is
        // at least one Report in our org WITH a namespace

        // Get a report to work with
        List<Report> reports = [
            select Id, DeveloperName, NamespacePrefix
            from Report
            where NamespacePrefix != null
            limit 1
        ];

        // Assuming that we have reports...
        if (!reports.isEmpty()) {
            // Get the first one in our list
            Report r = reports[0];

            //
            // CASE 1: Passing in namespace, developer name,
            // and parameter values
            //

            // Load up our Visualforce Page
            PageReference p = System.Page.ViewReport;
            p.getParameters().put('ns',r.NamespacePrefix);
            p.getParameters().put('dn',r.DeveloperName);
            p.getParameters().put('pv0','llamas');
            p.getParameters().put('pv2','alpacas');
            Test.setCurrentPage(p);

            // Load up our Controller
            ViewReportController ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            PageReference ret = ctl.redirect();

            // We should be sent to the View page for our Report
            System.assert(ret.getURL().contains('/'+r.Id));
            // Also, make sure that our Filter Criterion values
            // got passed along
            System.assert(ret.getURL().contains('pv0=llamas'));
            System.assert(ret.getURL().contains('pv2=alpacas'));

            //
            // CASE 2: Passing in just developer name
            //

            // Load up our Visualforce Page
            p = System.Page.ViewReport;
            p.getParameters().put('dn',r.DeveloperName);
            Test.setCurrentPage(p);

            // Load up our Controller
            ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            ret = ctl.redirect();

            // We should be sent to the View page for our Report
            System.assert(ret.getURL().contains('/'+r.Id));

            //
            // CASE 3: Passing in a nonexistent Report name
            //

            // Load up our Visualforce Page
            p = System.Page.ViewReport;
            p.getParameters().put('dn','BlahBLahBlahBlahBlahBlah');
            Test.setCurrentPage(p);

            // Load up our Controller
            ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            ret = ctl.redirect();

            // We should be sent to the Reports tab
            System.assert(ret.getURL().contains(
                '/'+Report.SObjectType.getDescribe().getKeyPrefix()+'/o'
            ));

        }

    }

}

And here’s an example of using this code in a Custom Link:
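(A hypothetical example, assuming a package namespace of myns and a Report whose DeveloperName is Enrollments_by_School:)

/apex/myns__ViewReport?ns=myns&dn=Enrollments_by_School&pv2={!Account.Name}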

Voila! An ISV-caliber way to link to any Report in any Managed Package — one that won’t break in an org where the package is installed!

The basic flow of the Apex is pretty simple: the redirect method gets called immediately upon page load, and it returns a page reference to redirect the user to. So all that Apex needs to do for us is find the Report with the provided API name / DeveloperName (and optionally in a specific namespace), and send us to /<Id>, where <Id> is the Id of that Report. Pretty straightforward. Just a few interesting points:

  1. We ‘tack-on’ to the resultant page reference any OTHER page parameters that the user passed along, so we can pass in any number of dynamic Filter Criteria parameters using the pv<n> syntax.
  2. You may be wondering — wait, you can QUERY on the Report object? Yep! Reports are technically SObjects, so you can query for them, but they fall under the mysterious category called “Setup” objects which, among other peculiar quirks (Google “MIXED_DML_OPERATION” for one of the more annoying ones), only selectively obey CRUD and only expose some of their fields to SOQL. Fortunately, for our purposes, the Id, Name, DeveloperName, and NamespacePrefix fields are all included in this short list. Actually, fire up SOQLXplorer — you might be inspired by some of the other fields that are exposed.
  3. Namespacing of Reports — Reports included in Managed Packages don’t have to have a globally unique name — they only have to be unique within their Namespace. Therefore, when querying for Reports, it’s best to query for a report within a particular namespace.
  4. If the Report is not found — in our example, we send the User to the Reports tab. You might want to do something different.