
To help foster an ongoing conversation among Salesforce ISV and OEM partners — aka developers of Salesforce AppExchange apps — I started this discussion on the Salesforce ISV Partners LinkedIn group, which I encourage fellow ISV’s/OEM’s to join:

Let’s pool our thoughts – best practices for Salesforce ISV/OEM app development

One of the best practices I brought up was the need to properly “protect” or “sandbox” your application’s references to external JavaScript libraries within complex “mash-up” pages that include JavaScript code written by various other ISV’s / OEM’s / consultants / developers, as well as underlying Salesforce.com JavaScript code.

These days, more and more apps being developed on the Salesforce Platform rely heavily on external JavaScript libraries such as jQuery, jQuery UI, jQuery Mobile, ExtJS, Sencha Touch, KnockoutJS, AngularJS, Backbone, MustacheJS, Underscore, JSONSelect, etc. Leveraging these libraries is definitely a best practice — don’t reinvent the wheel! As jQuery quips, “write less — do more!” As a Salesforce consultant, I think this is generally the goal 🙂

Problems emerge, though, when multiple ISV’s include different versions of these JavaScript libraries as Global Variables within their Visualforce Pages or Visualforce Components — because whichever version is loaded last will, by default, overwrite the earlier one. This makes it very difficult to mash up / integrate Visualforce Components or Pages from multiple ISV’s into a single page. When faced with this, a common developer response is to stop using the latest version of the external library and try to make their code work against the earlier version forcibly included in the page (perhaps by a managed Visualforce Component or an embedded Visualforce Page).

Fortunately, there IS a better way to avoid this.

In a nutshell, the solution is: “protect” or “localize” your references to any external libraries, preferably in a JavaScript namespace corresponding to your managed package.

For instance, if your Salesforce application has the namespace “skuid”, you will probably already have various JS Remoting functions available within the “skuid” JavaScript object that Salesforce automatically creates in pages whose controllers expose JS Remoting methods — and as an ISV, you are guaranteed that your managed app’s namespace is unique across Salesforce. So this namespace is just about the safest global variable you can use in the Salesforce world (anyone else who messes with it is being very, very naughty).

As a brief side-note, here’s how to ensure that your app’s “namespace global” has been defined before using it:

// Check that our Namespace Global has already been defined,
// and if not, create it.
window.skuid = window.skuid || {};

To protect your external library references, store a unique reference to these libraries within your namespace’s object, IMMEDIATELY after loading in an external library. For example, with MustacheJS:

// Load in Mustache -- will be stored as a global,
// thus accessible from window.Mustache
(function(){ /* MustacheJS 0.7.2 */ })()

// Store a protected reference to the MustacheJS library that WE loaded,
// so that we can safely refer to it later.
skuid.Mustache = window.Mustache;

Then, even if some other VF Component or Page loads in a different version of this library later on, you will still have access to the version you loaded (0.7.2):

// (other ISV's code)
window.Mustache = (function(){ /* MustacheJS 0.3.1 */ })()

// THIS code will run safely against 0.7.2!
skuid.Mustache.render('{{FirstName}} {{LastName}}',contactRecord);

// THIS code, however, would run against the latest version loaded in, e.g. 0.3.1,
// and thus FAILS, (since Mustache 0.3.1 has no render() method)
Mustache.render('{{Account.Name}}',contactRecord);

How to avoid having to use global references all the time

Some of you are probably thinking, “Great, now I have to prepend this global namespace all over the place!” Fortunately, with JavaScript, that’s not necessary. By wrapping your code in closures, you can safely keep using your familiar shorthand references to external libraries without worrying about version conflicts.

For instance, say that your application uses jQuery 1.8.3, but other Visualforce Components on the page are using jQuery as old as 1.3.2! (SO ancient… 🙂) What’s a developer to do?

Well, jQuery provides the helpful jQuery.noConflict() method, which allows you to easily obtain a safe reference to a version of jQuery immediately after you load it into your page. So, as an example, in YOUR code, you pull in jQuery 1.8.3:

<!-- load jQuery 1.8.3 -->
<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script type="text/javascript">
// Get a safe reference to jQuery 1.8.3
var jQuery_1_8_3 = $.noConflict(true);
</script>

Then, to your horror, another Visualforce Component that the customer you’re developing for has “got to have” in this same page (and which you don’t want to iframe…) has loaded in jQuery 1.3.2, and hasn’t bothered to namespace it! Therefore, both of the commonly-used jQuery globals (jQuery and $) now refer to jQuery 1.3.2!

Fortunately, FOR YOU, you’re safe! You got a protected reference to jQuery 1.8.3, and your code can carry on using $ without any issues, as long as you wrap it in a closure:

(function($){

   $('.nx-table').on('click','tr',function(){
       // Add "edit-mode" styles to this table row
       $(this).toggleClass('edit-mode');
   });

// Identify jQuery 1.8.3 as what we are referring to within this closure,
// so that we can carry on with $ as a shorthand reference to jQuery
// and be merry and happy like usual!
})(jQuery_1_8_3);

This one’s for you, coffee-shop hopping Force.com developers who go to make some changes to your code in Eclipse and — blocked! Login must use Security Token! Uggh. But I hate Security Tokens – sure, sure, it makes way more sense to use them. Definitely a best practice. But, well, sometimes I’m lazy. I confess – I very often just use Trusted IP Ranges.

So, for those of you out there that have been in my situation, and like me don’t like entering Security Tokens into your Eclipse, SOQLExplorer, Data Loader, etc., and prefer to just use Trusted IP Ranges and basic username/password, I’m guessing you find yourselves doing this over and over again:

  1. Show up at Coffee Shop you haven’t been to before – or at which you’ve never done any development on a particular Force.com project / org.
  2. Login to Eclipse.
  3. Try to save something.
  4. Fail – you’re at a non-trusted location and you’re not using the Security Token!
  5. Login to Salesforce.com.
  6. Go to your Personal Information.
  7. Copy the IP address from your most recent login.
  8. Go to Network Access, and add a Trusted IP Range, and paste in the IP Address from Step 7 into both Start and End IP Address. Click Save.
  9. Go back to Eclipse and resave.

Wish there was a way to do this faster?

There is – here’s a little convenience script for the uber-lazy, coffee-shop hopping, Security Token-hating Force.com developer – if there are any of you out there other than me, this is for you!

The Idea: a one-click link in a Home Page Component

My idea: create a Home Page Component for the Left Sidebar that, in one-click, adds a new Trusted IP Range corresponding to the current IP you’re at.

Problem 1: there’s no way to get your current IP address from just pure client-side JavaScript. Lots of server languages can give you this, but not client-side JavaScript.

Problem 2: how to quickly create a new Trusted IP Range? There’s no API access to the Trusted IP Range object (which is probably a good thing from a security perspective 🙂).

Problem 1 Solution: Use JSONP + an External Site

To make this a reality, I took the advice of StackOverflow and leveraged JSONP, which allows you to send an AJAX request to an external website and have the result returned in JSON format to a pre-designated JavaScript callback function, which immediately gets called.
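With jQuery, the basic getJSON() syntax looks roughly like this (the endpoint URL below is just a placeholder; the real component code appears further down):

// The "callback=?" token tells jQuery to issue the request as JSONP
jQuery.getJSON("https://example.com/jsonp-endpoint?callback=?", function(data){
    // data is the object that the endpoint handed to our callback
    console.log(data);
});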

Problem 2 Solution: URL Redirect with params passed in

One thing I love about the Force.com platform is that almost every action you might want to take, whether for modification of metadata or data, can be initiated by going to a specific URL. To create a new Account record, you go to “/001/e”. To view all users, you go to “/005”. And almost any field on a particular page can be pre-populated by passing a corresponding URL Parameter (although these are not always consistent). And so it is with Trusted IP Ranges – to create a new Trusted IP Range, head to “/05G/e”. And to pre-populate the Start and End IP Address, you simply have to pass in the right query string parameters: “IpStartAddress” and “IpEndAddress”.
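For example, a URL like the following (with a made-up IP address) pre-populates both fields on the new Trusted IP Range page:

/05G/e?IpStartAddress=203.0.113.42&IpEndAddress=203.0.113.42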

All together now! The final product.

So here’s the final solution:

  1. Go to Setup > Home Page Components > Custom Components.
  2. Click New.
  3. Enter a name, e.g. “Add IP to Trusted Ranges”, and select “HTML Area” as the type. Click Next.
  4. For Position, select “Narrow (Left) Column”.
  5. Click the “Show HTML” checkbox on the far right, and enter the following code for the body, then click Save.

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script type="application/javascript">
  function addIPToTrustedRanges(){
    jQuery.getJSON(
      "https://smart-ip.net/geoip-json?callback=?",
      function(data){
        var ip = data.host;
        window.location.href = "/05G/e?IpStartAddress="+ip+"&IpEndAddress="+ip;
      }
    );
  }
</script>
<a href="javascript:addIPToTrustedRanges();">Add Trusted IP (using Smart IP)</a>

6. Go to Home Page Layouts, and add this Home Page Component to your layout as a System Admin.

Now return to your home page, and voila! Here’s what you’ve got: a link that, when clicked, takes you to a pre-populated new Trusted IP Range – just click Save, and boom, get back to Eclipse and save your code!

[Screenshot: the “Add Trusted IP” Home Page Component in the left sidebar]

[Screenshot: the new Trusted IP Range page, with Start and End IP Address pre-populated]

How does it work?

Nothing crazy here, but I’ll step through the code anyhow:

  1. Load jQuery from the Google APIs CDN, so that we can use the handy getJSON() method.
  2. Define a global function called addIPToTrustedRanges. This function performs an AJAX callout to the endpoint “https://smart-ip.net/geoip-json”, passing in the name of a callback function to execute with the JSON data that this endpoint returns. Smart IP basically returns a little string of data that is a JSON-serialized JavaScript object whose “host” property is your current IP Address (the real one, not some NAT-translated private IP). jQuery then calls your callback function — which, in the getJSON() method, is defined as an anonymous function in the second parameter — and passes in this JSON data as an argument. This is all legal, within the confines of the JavaScript same-origin policy, because of JSONP.
  3. In our anonymous callback function, we do a URL redirect, passing in our current IP address as both the Start and End IP Address as parameters to the new Trusted IP Address page:
    "/05G/e?IpStartAddress="+ip+"&IpEndAddress="+ip;
  4. Then we click Save, and we’re done!

Quick check to make sure this post is worth 5 minutes of your precious work day:

If you have ever wanted:

(1) To run long chains of Batch Apex jobs in sequence without using up scheduled Apex jobs
(2) To run batch/scheduled Apex as often as every 5 minutes every day
(3) To manage, mass-edit, mass abort, and mass-activate all your scheduled and aggregable/chained Apex jobs in one place, in DATA
(4) To avoid the pain of unscheduling and rescheduling Apex jobs when deploying

Then keep reading 🙂

A recent (well, maybe not so recent — is it June already???) blog post by Matt Lacey really struck a chord with me. Matt highlights one of the biggest developer woes related to using Scheduled Apex — whenever you want to deploy code that is in some way referenced by an Apex Class (or classes) that is/are scheduled, you have to unschedule all of these classes. With Spring 12, Salesforce upped the limit on the number of Scheduled Apex Classes from 10 to 25 (ha–lellujah! ha–lellujah!). However, with 15 more scheduled jobs to work with, this have-to-unschedule-before-deploying problem becomes even more of a pain.

But this isn’t the only woe developers have related to asynchronous Apex. An issue that has shown up a lot lately on the Force.com Developer Boards and LinkedIn Groups is that of running Batch Apex jobs in sequence — run one batch job, then run another immediately afterwards. Cory Cowgill published an excellent solution for accomplishing this, and this has been the go-to method for linking Batch Apex jobs ever since. But one of the problems with his method is that it leaves a trail of scheduled jobs lying in its wake — seriously cluttering your CronTrigger table. If you want to run these batch process sequences often — e.g. kicking them off from a trigger, or running them every day (or even every hour!), you have to start “managing” your scheduled jobs more effectively.
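For those who haven’t seen it, the pattern looks roughly like this (the class names are hypothetical; this is a sketch of the general technique, not Cory’s actual code):

// Sketch of the "schedule the next batch from finish()" chaining technique.
// Class names (NextBatchLauncher, SecondBatchJob) are hypothetical.
global class NextBatchLauncher implements Schedulable {
    global void execute(SchedulableContext ctx) {
        // Kick off the next batch job in the chain
        Database.executeBatch(new SecondBatchJob());
    }
}

// ...meanwhile, in the finish() method of the FIRST batch class:
global void finish(Database.BatchableContext btx) {
    // Schedule the launcher to fire one minute from now
    Datetime next = Datetime.now().addMinutes(1);
    String cron = '0 ' + next.minute() + ' ' + next.hour() + ' '
        + next.day() + ' ' + next.month() + ' ? ' + next.year();
    // Every chained run leaves another CronTrigger record behind
    System.schedule('Launch next batch ' + next.getTime(), cron, new NextBatchLauncher());
}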

A third issue often cited by developers is the frequency at which jobs can be run. Through the UI, a single CronTrigger can only be scheduled to run once a day. Through code, you can get down to once an hour. If you wanted to, say, run a process once every 15 minutes, you’d have to schedule the same class 4 times — using up 4/25 (16%) of your allotted scheduled jobs — and you have to manage this through Apex, not the UI.
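To make that concrete, here’s a rough sketch of that workaround (the class name is hypothetical), burning four scheduled jobs just to get 15-minute granularity:

// Hypothetical Schedulable class; one CronTrigger per quarter-hour,
// since a single Salesforce cron expression can only name one specific minute
MyEvery15MinuteJob job = new MyEvery15MinuteJob();
System.schedule('Every 15 - :00', '0 0 * * * ?', job);
System.schedule('Every 15 - :15', '0 15 * * * ?', job);
System.schedule('Every 15 - :30', '0 30 * * * ?', job);
System.schedule('Every 15 - :45', '0 45 * * * ?', job);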

As I mulled over these issues, I thought, there’s got to be a better way.

There is.

Enter Relax

Relax is a lightweight app I’m about to release, but before I do, I’m throwing it out for y’all to try as a public beta. (The install link is at the end of the post, but restrain yourself, we’ll get there 🙂 )

Broadly speaking, Relax seeks to address all of these challenges, and a whole lot more.

At a high level, here are a few of the things it lets you do:

  1. Manage all your individually-scheduled Apex jobs in data records (through a Job__c object). Jobs can be mass scheduled and mass aborted, or mass exported between orgs, and then relaunched with clicks, not code.
  2. Create and schedule ordered Batch Apex “chains” of virtually unlimited length, with each “link” in the chain kicking off the next link as soon as it finishes. And all of your chained processes are handled by, on average, ONE Scheduled Job.
  3. Schedule jobs to be run as often as EVERY 1 MINUTE. ONE. UNO.
  4. Run ad-hoc “one-off” scheduled jobs without cutting into your 25 allotted scheduled jobs.

Let’s jump in.
Individual vs. Aggregable
In Relax, there are 2 types of jobs: individual and aggregable. You create a separate Job__c record corresponding to each of your Individual jobs, and all of the detail about each job is stored in its record. You can then separately or mass activate / deactivate your jobs simply by flipping the Active? checkbox. Each Individual job is separately scheduled — meaning there’s a one-to-one mapping between a CronTrigger record and an Individual Job__c record. Here’s what it looks like to create an Individual Job. You simply specify the CRON schedule defining when the job should run, and choose a class to run from a drop-down of Schedulable Apex Classes.

Aggregable jobs, on the other hand, are all run as needed by the Relax Job Scheduler at arbitrary increments. For instance, you could have an aggregable job that runs once every 5 minutes that checks for new Cases created by users of your public Force.com Site, whose Site Guest User Profile does not have access to Chatter, and inserts Chatter Posts on appropriate records. You could have a job that swaps the SLA of Gold/Bronze SLA Accounts once every minute (contrived, yes, but OH so useful 🙂). Or you could have a series of 5 complex Batch Apex de-duplication routines that need to be run one after the other: set them all up as separate aggregable jobs, assign orders to them, and have the entire series run once every 15 minutes, every day. Here’s what the SLA swapper example looks like:
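(In rough data terms, that Job__c record would look something like the sketch below. The field API names here are purely illustrative guesses, not the package’s actual schema; the point is simply that the whole schedule lives in data.)

// Illustrative only -- these field API names are guesses, not Relax's actual schema
relax__Job__c slaSwapper = new relax__Job__c(
    Name = 'SLA Swapper',
    relax__Apex_Class__c = 'SwapSLAs',    // the aggregable class to run
    relax__Aggregable__c = true,          // run by the Relax Job Scheduler, not its own CronTrigger
    relax__Run_Increment_Minutes__c = 1,  // run once every minute
    relax__Active__c = true               // flip to false to deactivate
);
insert slaSwapper;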

How do I use it?
What do you have to do for your code to fit into the Relax framework? It’s extremely simple. For Individual Jobs, your Apex Class just needs to be Schedulable. For Aggregable Jobs, there are several options, depending on what kind of code you’d like to run. For most devs, this will be Batch Apex, so the most useful option at your disposal is to take any existing Batch Apex class you have and have it extend the “BatchableProcessStep” class that comes with Relax:

// relax's BatchableProcessStep implements Database.Batchable<sObject>,
// so all you have to do is override the start,execute,and finish methods
global class SwapSLAs extends relax.BatchableProcessStep {

	// Swaps the SLA's of our Gold and Bronze accounts

        // Note the override
	global override Database.Querylocator start(Database.BatchableContext btx) {
		return Database.getQueryLocator([
                     select SLA__c from Account where SLA__c in ('Gold','Bronze')
                ]);
	}

	global override void execute(Database.BatchableContext btx, List<SObject> scope) {
		List<Account> accs = (List<Account>) scope;
		for (Account a : accs) {
			if (a.SLA__c == 'Gold') a.SLA__c = 'Bronze';
			else if (a.SLA__c == 'Bronze') a.SLA__c = 'Gold';
		}
		update accs;
	}

        // The ProcessStep interface includes a complete() method
        // which you should call at the end of your finish() method
        // to allow relax to continue chains of Aggregable Jobs
	global override void finish(Database.BatchableContext btx) {
		// Complete this ProcessStep
		complete();
	}

}

That’s it! As long as you call the complete() method at the end of your finish() method, relax will be able to keep infinitely-long chains of Batch Jobs going. Plus, this framework is merely an extension of Database.Batchable — meaning you can still call Database.executeBatch() on your aggregable Batch Apex and use it outside of the context of Relax.
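For instance, nothing stops you from running the SwapSLAs class above as a plain one-off batch job from Anonymous Apex:

// Run the aggregable batch class directly, outside of Relax
Database.executeBatch(new SwapSLAs());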

Relax in action

In our org, we have 4 jobs set up: 1 individual, 3 aggregable. To kick them off, we select all of them, and change their Active? field to true. Here’s what it looks like after we’ve mass-activated them:

And here’s what the Scheduled Jobs table (accessible through the Setup menu) looks like. Since Case Escalator was set to be run individually, it has its own job. Then there’s a single “Relax Job Scheduler”, which runs every 5 minutes (I’ll probably drop this down to every 1 minute once I officially release the app), checking to see if there are any aggregable Jobs that need to be run, and running them:

Every time Relax Job Scheduler runs, the very first thing it does is schedule itself to run again — regardless of what happens during any processing that it initiates. It then queries the “Next Run” field on each Active, Aggregable Job__c record that’s not already being run, and if Next Run is less than now, it queues that job up to be run as part of a Relax “Process”. Each Process can have an arbitrary number of ProcessSteps, which will be executed sequentially until none remain. If both the current and next ProcessSteps are BatchableProcessSteps, Relax uses a “Process Balloon” to keep the Process “afloat” — essentially launching a temporary scheduled job that is thrown away as soon as the next ProcessStep begins.

One-off Jobs

Another powerful feature of Relax is the ability to effortlessly launch one-off, one-time scheduled jobs, without having to worry about cluttering the CronTrigger table with another scheduled job. It takes just 1 line of code, AND you can specify the name of the class to run dynamically — e.g. as a String! Reflection ROCKS!!!

// Schedule your job to be run ASAP,
// but maintain the Job__c record so that we can review it later
relax.JobScheduler.CreateOneTimeJob('myNamespace.AccountCleansing');

// Schedule your job to be run 3 minutes from now,
// and delete the Job__c record after it's run
relax.JobScheduler.CreateOneTimeJob(
    'SwapSLAs',
    Datetime.now().addMinutes(3),
    true
);

Try it out for yourself!

Does this sound awesome to you? Give it a test run! Install the app right now into any org you’d like! Version 1.1 (managed-released)

Please send me any feedback you may have! What would make this more usable for you? I’ll probably be releasing the app within the next month or so, so stay tuned!

When I hear the words “Reports” and “Managed Packages” in the same sentence, I involuntarily let out a grunt of displeasure. Ask any seasoned ISV, and I guarantee you that the same sour taste fills their mouths. Why? Well, here’s the classic problem: An ISV includes some Reports in their managed package. Now, a common trick for making Reports “dynamic” is to leave one of the Filter Criteria blank and then have its value passed in through query string parameters using the “pv<n>” syntax, where n is the (0-indexed) filter criterion you’d like to populate. For example, in this report of Enrollments at a given School, parameter 2 (0-indexed) is left BLANK:

Then, if we load up this page with query string parameter “pv2” set to the name of a School, like so:
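For example, with a URL along these lines (the report Id here is made up, and the parameter value is URL-encoded):

/00O80000004cRZx?pv2=Lincoln+High+School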

the value that we passed in will be dynamically inserted into the 2nd Filter Criterion, and we’ll have a report on Enrollments at Lincoln High School:

This is awesome, right? Quick, let’s throw a Custom Link on the Account Detail Page called “Enrollments” that links to this report, but passing in the Id of the Report! Yeah, yeah, yeah! I love this!

Hold your horses, partner.

This is where the ISV’s hang their heads in sadness… sorry son, it just ain’t that easy.

What’s the matter, grandpa? Come on, this is child’s play!

Not quite.

Notice where we said that we’d be passing in the ID of the Report. Hard-coded. For an ISV, ID’s are the ultimate taboo. Why? Well, sure, you can package up the Report and Custom Link. But as soon as you install the package into a customer’s org, the Id of the Report has CHANGED — and the link is BROKEN. It’s a situation that the typical one-org Admin will never have to face, but, well, welcome to the world of the ISV.

Isn’t there one of those handy Global Variables which lets you grab the Name or DeveloperName of a Report?

Nope, sorry partner.

So, what DO you do?

Well, you write a ‘ViewReport’ Visualforce page that takes in the API name of a Report — which does NOT change across all the orgs that a package is installed into — and uses this API name to find the ID of the Report and send you to it. What does this look like?

The Visualforce is simple — one line, to be exact:


<apex:page controller="ViewReportController" action="{!redirect}"/>

The Apex Controller is a little more interesting. Here’s the meat, with test code included that achieves 100% coverage (so you can start using it right away!!!):


public class ViewReportController {

    // Controller for ViewReport.page,
    // which redirects the user to a Salesforce Report
    // whose name is passed in as a Query String parameter

    // We expect to be handed 1-2 parameters:
    // r: the DEVELOPER name of the Report you want to view
    // ns: a salesforce namespace prefix (optional)
    public PageReference redirect() {
        // Get all page parameters
        Map<String,String> params = ApexPages.currentPage().getParameters();

        String ns = params.get('ns'); // NamespacePrefix
        String dn = params.get('dn'); // DeveloperName

        List<Report> reports;

        // If a Namespace is provided,
        // then find the report with the specified DeveloperName
        // in the provided Namespace
        // (otherwise, we might find a report in the wrong namespace)
        if (ns != null) {
            reports = [select Id from Report
                  where NamespacePrefix = :ns
                  and DeveloperName = :dn limit 1];
        } else {
            reports = [select Id from Report where DeveloperName = :dn limit 1];
        }

        PageReference pgRef;

        // If we found a Report, go view it
        if (reports != null && !reports.isEmpty()) {
            pgRef = new PageReference('/' + reports[0].Id);
            // Add back in all of the parameters we were passed in,
            // MINUS the ones we already used: ns, dn
            params.remove('ns');
            params.remove('dn');
            pgRef.getParameters().putAll(params);
        } else {
            // We couldn't find the Report,
            // so send the User to the Reports tab
            pgRef = new PageReference('/'
                + Report.SObjectType.getDescribe().getKeyPrefix()
                + '/o'
            );
        }

        // Navigate to the page we've decided on
        pgRef.setRedirect(true);
        return pgRef;

    }

    ////////////////////
    // UNIT TESTS
    ////////////////////

    // We MUST be able to see real Reports for this to work,
    // because we can't insert test Reports.
    // Therefore, in Spring 12, we must use the SeeAllData annotation
    @isTest(SeeAllData=true)
    private static void Test_Controller() {
        // For this example, we assume that there is
        // at least one Report in our org WITH a namespace

        // Get a report to work with
        List<Report> reports = [
            select Id, DeveloperName, NamespacePrefix
            from Report
            where NamespacePrefix != null
            limit 1
        ];

        // Assuming that we have reports...
        if (!reports.isEmpty()) {
            // Get the first one in our list
            Report r = reports[0];

            //
            // CASE 1: Passing in namespace, developername,
            // and parameter values
            //

            // Load up our Visualforce Page
            PageReference p = System.Page.ViewReport;
            p.getParameters().put('ns',r.NamespacePrefix);
            p.getParameters().put('dn',r.DeveloperName);
            p.getParameters().put('pv0','llamas');
            p.getParameters().put('pv2','alpacas');
            Test.setCurrentPage(p);

            // Load up our Controller
            ViewReportController ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            PageReference ret = ctl.redirect();

            // We should be sent to the View page for our Report
            System.assert(ret.getURL().contains('/'+r.Id));
            // Also, make sure that our Filter Criterion values
            // got passed along
            System.assert(ret.getURL().contains('pv0=llamas'));
            System.assert(ret.getURL().contains('pv2=alpacas'));

            //
            // CASE 2: Passing in just developername
            //

            // Load up our Visualforce Page
            p = System.Page.ViewReport;
            p.getParameters().put('dn',r.DeveloperName);
            Test.setCurrentPage(p);

            // Load up our Controller
            ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            ret = ctl.redirect();

            // We should be sent to the View page for our Report
            System.assert(ret.getURL().contains('/'+r.Id));

            //
            // CASE 3: Passing in a nonexistent Report name
            //

            // Load up our Visualforce Page
            p = System.Page.ViewReport;
            p.getParameters().put('dn','BlahBLahBlahBlahBlahBlah');
            Test.setCurrentPage(p);

            // Load up our Controller
            ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            ret = ctl.redirect();

            // We should be sent to the Reports tab
            System.assert(ret.getURL().contains(
                '/'+Report.SObjectType.getDescribe().getKeyPrefix()+'/o'
            ));

        }

    }

}

And here’s an example of using this code in a Custom Link:
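For instance, you could add a Detail Page Link on Account whose URL looks something like this (the namespace prefix and Report DeveloperName here are hypothetical; swap in your own):

/apex/ViewReport?ns=acme&dn=Enrollments_by_School&pv2={!Account.Name}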

Voila! An ISV-caliber way to link to any Report in any Managed Package, and it won’t break in an org where the package is installed!

The basic flow of the Apex is pretty simple: the redirect method gets called immediately upon page load, and it returns a page reference to redirect the user to. So all that Apex needs to do for us is find the Report with the provided API name / DeveloperName (and optionally in a specific namespace), and send us to /<Id>, where <Id> is the Id of that Report. Pretty straightforward. Just a few interesting points:

  1. We ‘tack on’ to the resultant page reference any OTHER page parameters that the user passed along, so we can pass in any number of dynamic Filter Criteria parameters using the pv<n> syntax.
  2. You may be wondering — wait, you can QUERY on the Report object? Yep! Reports are technically SObjects, so you can query for them, but they fall under the mysterious category called “Setup” objects which, among other peculiar quirks (Google “MIXED_DML_OPERATION” for one of the more annoying ones), only selectively obey CRUD and only expose some of their fields to SOQL. Fortunately, for our purposes, the Id, Name, DeveloperName, and NamespacePrefix fields are all included in this short list. Actually, fire up SOQLXplorer — you might be inspired by some of the other fields that are exposed.
  3. Namespacing of Reports — Reports included in Managed Packages don’t have to have a globally unique name — they only have to be unique within their Namespace. Therefore, when querying for Reports, it’s best to query for a report within a particular namespace.
  4. If the Report is not found — in our example, we send the User to the Reports tab. You might want to do something different.