
TL;DR (for non-millennials: “Summary”):

When you’re traveling for business, bring these 5 things with you, and you’ll be able to nearly eliminate your day-to-day business trip waste:

  1. Stainless-steel water bottle

  2. Coffee thermos

  3. Metal fork

  4. Metal spoon

  5. Napkin

Optional:

  1. A few small, light, reusable bags and containers


Coffee

I don’t have to travel very often for work, but every time I do, I’m stunned by the sheer amount of waste I see all around me. Every day, over 2.25 BILLION cups of coffee are consumed worldwide (and that figure comes from a 2002 study, so the numbers have only increased since then). What’s the big deal with that? Well, setting aside the issues involved in production and transportation (a topic for a different post… :), if you get your coffee to-go in a “paper” cup, those cups are NOT recyclable, because they’re lined with plastic – they all end up going to the landfill.

In the United States, most of the 500 million+ cups of coffee we drink every day are served in these non-recyclable plastic-lined paper cups. When I’m traveling through the Atlanta airport, I get a very real sense of the staggering volume of this waste. Every minute I walk by hundreds of people, most of them with a plastic-lined paper coffee cup in hand. Craig Reucassel did a famous stunt as part of his “War on Waste” in Melbourne, Australia to try to help people grasp just how huge the problem of throwaway coffee cups really is – he filled a city bus to the brim with the amount of coffee cups that get thrown away every 30 minutes.

So, what can we do about it?

Actually, it’s really, really, really easy — bring your own coffee thermos with you, everywhere.

I have never yet been to a coffee shop, whether in my hometown of Chattanooga, in the airport, or in any of the cities I’ve traveled to recently, that was not willing to fill my coffee thermos with whatever drink I wanted to purchase. Many coffee shops even give discounts if you bring your own cup / thermos. My wife is currently in the process of compiling a list of coffee shops in Chattanooga that discount your coffee if you bring your own mug, and so far, the majority do.

While in Seattle, for example, I brought my own mug to the offices of the client we were visiting and filled it from their in-office drip coffee. This morning, before heading to the airport, I went to the coffee shop downstairs from the Airbnb and asked them to fill my mug, and they were happy to!

If everyone in the world drank coffee every day from reusable coffee mugs, we would literally be removing MOUNTAINS of waste from the world.

Every year, we throw away 50 BILLION paper coffee cups — just in the United States.

To try to help us visualize what 50 billion paper coffee cups looks like, ECO2Greetings created this amazing visualization, which hit home for me as I have been in Seattle this week for business:


https://www.eco2greetings.com/News/waste-mountain-of-coffee-cups.html

Water

When traveling, bring your own reusable water bottle. I fill mine up in the morning before I head to the airport and finish drinking it by the time I get to security. (I drink a ton of water, so usually I refill it when I get to the airport and finish that one before the security line too.) You should be able to get down at least one full bottle before security – if not, you’re probably not drinking enough water to begin with, but that’s another topic. Okay, just a tiny tangent: your body is anywhere between 40% and 65% water, so it’s common sense that you need to drink a lot of it to keep your body working as it should. And did you know that drinking just one glass of water before a test improves scores by 10% on average?

Once you get through security, fill up your bottle again, and then you can take it on the airplane with you.

Most major airports now have water-bottle filling stations that make it easy to quickly refill your bottle, and companies like Amazon are even starting to install them in their offices everywhere they have water fountains.


What’s wrong with just buying bottled water when you’re on the go?

SO MANY THINGS. Here’s a quick summary:

  • Price: Would you pay $10,000 for a cookie? Tap water in the US averages less than $0.01 per gallon. A plastic bottle of water costs anywhere from $1 to $6 per liter (I saw $6 bottles of water in a hotel in San Francisco the last time I traveled there!!!). Do the math: $6 per liter is roughly $23 per gallon, versus less than a penny per gallon from the tap – a markup of thousands of times, up to 10,000x at the extreme. And airports and restaurants give you tap water for free.
  • Quality: About 24% of bottled water in the US is just tap water put in plastic bottles. Moreover, the EPA subjects tap water to publicly-disclosed quality assurance tests, whereas bottled water is regulated by the FDA, and the FDA does not disclose the results of these tests to the public. (More info: https://www.banthebottle.net/bottled-water-facts/ )
  • Transportation: Tap water comes from local sources in whatever community you happen to be in, whereas bottled water often gets shipped thousands of miles, sometimes across the ocean, consuming millions of gallons of fuel annually. The irony is staggering – consider Fiji Water. Why in the world do we send giant container ships thousands of miles across the ocean to Fiji, take water away from the people living there, haul it thousands of miles back (using up huge quantities of oil), put it in plastic bottles (made from yet more oil), and then ship it across the country – using still more oil – to people who can literally get free tap water out of their faucets?
  • Sourcing: Companies like Nestlé frequently extract water, sometimes illegally, from communities that desperately need it. This is wrong on so many levels that I’ll let you read the Story of Stuff’s extensive investigative work rather than rehash it here.
  • Packaging / Recycling / Waste: Plastic comes from petroleum. We are sending huge machines to the bottom of the ocean, to remote regions of Alaska, etc. to dig up more and more oil. Currently, 8% of the world’s oil is used to make plastic. By 2050, that is projected to increase to 20%. Nearly half of all plastic ever produced has been made since the year 2000. I could go on. Plastic is amazing when it’s used for creating durable, flexible, structural materials. But it’s a terrible choice for single-use, disposable containers – which unfortunately we’re using plastic for more and more. 40% of plastic produced is used for single-use, throw-away packaging.

Utensils

Whether you’re trying to get a quick bite to eat in the airport, having a lunch meeting, or attending a conference and getting the conference-provided lunch, you’re almost certainly not going to be given reusable, metal silverware to use. Solution? Bring your own. I always bring my own fork and spoon wrapped up in a cloth napkin in my laptop bag, so that no matter where I end up over the course of a day of business travel, I can always pull out my own fork and spoon.

Salesforce.com’s Dreamforce conference is, overall, amazing in terms of its commitment to sustainability. For years, they have served their conference meals in compostable containers or on compostable plates, with compostable silverware. Obviously, though, it still takes energy to produce and compost all these compostable materials — and many companies just use plastic utensils and plates at their conferences. So, if you can avoid at least part of this waste by just bringing your own utensils… that helps a lot in the long run. The plates, or bowls if soup is being served, are much harder to replace with your own dishes while traveling — plates and bowls are heavy and awkward to carry around in your bag, and it’s a lot harder, and not necessarily more energy-efficient in the long run, to clean them out after every meal. Utensils, on the other hand, are an easy, quick win in the effort to be zero-waste / low-impact. Bring your own, wherever you go!

Buy in bulk, eat whole foods, bring your own bags

My family and I try to eat a whole-foods, plant-based diet as much as we can. We buy basically all of our food either from bulk bins or the fresh-produce aisle, using all of our own reusable containers (to learn more: Zero Waste Chattanooga). Our breakfast of choice is oatmeal with raisins, apples, cinnamon, a pinch of cloves, a bit of molasses, and a few walnuts when we have them. Our little girls love their “yoat-meal” and we do too! We recently bought a 75-pound bag of organic quick oats from Whole Foods to eliminate having to bring so many containers for oatmeal on our bi-monthly trips there… and that will last us maybe 2 months 🙂

I’d love to be able to eat my daily oatmeal while traveling, just the way I like it with lots of whole fruits, without having to pay the ridiculous prices people charge you for oatmeal at restaurants.

The secret – bring your own reusable bags with you on your trips, and stay somewhere with a microwave / stove so that you can cook your own breakfasts.

On this week’s trip to Seattle, I chose to stay in an Airbnb a few blocks away from the offices where my meetings were, so that I could not only walk to / from everywhere but also make my own breakfasts in the morning. I find that on business trips you’re usually eating with coworkers or clients for lunch and dinner, but breakfast is often a good chance to make your own food and save some money.

So on the first day, I walked about 6 blocks to the Whole Foods in downtown Seattle with some reusable bags and containers in my backpack, and bought enough oatmeal, raisins, apples, and walnuts for me to have heaping-full bowls of delicious home-cooked oatmeal every day I was there:


If you’ve never bought in bulk before, all you have to do is ask an employee to “tare” your containers the first time you use a given reusable container, and then write the tare on the container in marker. Then fill your container with whatever you want to buy from the bulk bin, and use your phone to snap a photo of the PLU number of each item. When you check out, just tell the cashier the tare of each container and give them the PLU numbers. Whole Foods even gives you a discount for bringing your own containers!


One thing that’s problematic when traveling for business: receipts. Before paying for my items at home, I always try to remember to tell the cashier that I do NOT want a receipt. Receipts are a huge source of waste, as they CANNOT be recycled with other paper because they are laced with plastic. With the advent of new payment systems that allow receipts to be emailed, texted, or skipped entirely, this is slowly becoming less of a problem, but it’s a hassle when traveling for business: if I want to expense my food costs, I have to have a receipt. Ideally I can use an electronic receipt, but when that’s not possible I have the receipt printed and take a picture of it. Unfortunately, the receipt itself then has to go in the landfill trash 😦

On the flip side, one thing I love about Seattle is that basically everywhere you go, from restaurants to parks to corporate offices, there are compost bins! At my office, I pushed for us to have compost bins available for coffee grounds and food waste to be placed in, and at the end of every week I or a coworker brings the bins home and adds them to our own compost pile.

 

Airport food and snacks

When traveling, I want to feel awake and alert, and I don’t want to get sick. Eating a whole-foods, plant-based diet has been incredible for many reasons, but one of the best is that I rarely, if ever, get sick and am never tired during the day – unless my wife and I stay up late playing Age of Empires or our little girls wake us up in the middle of the night, both of which are unavoidable parts of life 🙂

There’s no reason that this has to be any different while traveling. The solution: bring your own meals and snacks to the airport. Many people don’t realize that TSA is totally cool with you bringing entire sandwiches, salads, virtually whatever the heck you want, right through security. You do NOT have to buy food after going through security — you can make your own favorite foods at home and bring them to the airport!

On the way here to Seattle, I brought leftover pancakes with peanut butter and maple syrup — our go-to Saturday morning breakfast — and ate them before heading onto the plane. My wonderful wife made some granola (another reason we go through so much oatmeal) the night before and threw it into a container with some seeds and raisins. I brought that heaping container on the plane with me and was snacking on it all the way… garnering lots of longing looks from those in the seats next to me. I also usually bring along apples to snack on while traveling.


Having your own healthy, whole-food snacks not only makes it easy to say no to the plastic-wrapped junk-food snacks in the airport and on the airplane, but it also helps keep you healthy while traveling. Eating sugary or fatty foods – yes, even that precious Chick-fil-A sandwich — while under the stress of traveling weakens your immune system and makes you that much more likely to succumb to the innumerable germs you’re unavoidably going to encounter in the airport, airplane, trains, or client offices. Plus, eating good food makes you more alert – that afternoon fried-chicken coma doesn’t help with closing deals or paying attention in hour-long conference lectures 🙂

Booth giveaways and hotel shampoos

What happens to all of those freebies that companies give away at trade shows and conferences? Excluding the awesome socks and t-shirts that my company Skuid produces :), how many of the other booth giveaways are you still using a year after the conference? So many of them just end up getting thrown away.

The solution: just say no to freebies and tchotchkes. Whether you’re a company staffing a booth or an employee visiting another company’s booth, really think hard about ways to eliminate the waste generated by booth giveaways. People don’t need that many shirts – is it worth the water, oil, and sweatshop labor it took for your company to buy 500 of those shirts at $10 each? Do you really need another t-shirt to add to your drawer of 30 conference t-shirts that you never wear and will eventually donate to Goodwill in 10 years?

Hotel shampoos are another common source of plastic waste that’s also easy to avoid. One option: use Airbnb — usually your Airbnb host will provide shampoo and all that in bulk containers, so you don’t even need to worry about bringing your own. Or just bring your own shampoo in reusable, travel-sized bottles. Very easy, totally kosher with TSA; just remember to pull them out when going through the screeners to avoid getting searched afterwards.

Conclusion

I hope this has been helpful in thinking of ways to reduce your environmental impact on business trips. I’d love to hear your thoughts in the comments!

 


To help foster an ongoing conversation among Salesforce ISV and OEM partners — aka developers of Salesforce AppExchange apps — I started this discussion on the Salesforce ISV Partners LinkedIn group, which I encourage fellow ISVs/OEMs to join:

Let’s pool our thoughts – best practices for Salesforce ISV/OEM app development

One of the best practices I brought up was the need to properly “protect” or “sandbox” your application’s references to external JavaScript libraries within complex “mash-up” pages that include JavaScript code written by various other ISVs / OEMs / consultants / developers, as well as underlying Salesforce.com JavaScript code.

These days, more and more apps being developed on the Salesforce Platform rely heavily on external JavaScript libraries such as jQuery, jQuery UI, jQuery Mobile, ExtJS, Sencha Touch, KnockoutJS, AngularJS, Backbone, MustacheJS, Underscore, JSONSelect, etc. Leveraging these libraries is definitely a best practice — don’t reinvent the wheel! As jQuery quips, “write less — do more!” As a Salesforce consultant, I think this is generally the goal 🙂

Problems emerge, though, when multiple ISVs include different versions of these JavaScript libraries as Global Variables within their Visualforce Pages or Visualforce Components — because whichever version is loaded last will, by default, overwrite the earlier version. This makes it very difficult to mash up / integrate Visualforce Components or Pages from multiple ISVs into a single page. When faced with this, a common developer response is to give up on the latest version of the external library and try to make their code work against the earlier version forcibly included in the page (perhaps by a managed Visualforce Component or embedded Visualforce Page).

Fortunately, there IS a better way.

In a nutshell, the solution is: “protect” or “localize” your references to any external libraries, preferably in a JavaScript namespace corresponding to your managed package.

For instance, if your Salesforce application has the namespace “skuid”, you’re probably already going to have various JS Remoting functions available within the “skuid” JavaScript object that Salesforce automatically creates in pages whose controllers have JS Remoting methods — and as an ISV, you are guaranteed that your managed app’s namespace is unique across Salesforce. So this namespace is just about the safest global variable you can use in the Salesforce world (anyone else who messes with it is being very, very naughty).

As a brief side-note, here’s how to ensure that your app’s “namespace global” has been defined before using it:

// Check that our Namespace Global has already been defined,
// and if not, create it.
// (Referencing window.skuid avoids a ReferenceError if nothing has declared it yet.)
window.skuid || (window.skuid = {});

To protect your external library references, store a unique reference to these libraries within your namespace’s object, IMMEDIATELY after loading in an external library such as MustacheJS:

// Load in Mustache -- will be stored as a global,
// thus accessible from window.Mustache
(function(){ /* MustacheJS 0.7.2 */ })()

// Store a protected reference to the MustacheJS library that WE loaded,
// so that we can safely refer to it later.
skuid.Mustache = window.Mustache;

Then, even if some other VF Component or Page loads in a different version of this library later on, you will still have access to the version you loaded (0.7.2)

// (other ISV's code)
window.Mustache = (function(){ /* MustacheJS 0.3.1 */ })()

// THIS code will run safely against 0.7.2!
skuid.Mustache.render('{{FirstName}} {{LastName}}',contactRecord);

// THIS code, however, would run against the latest version loaded in, e.g. 0.3.1,
// and thus FAILS, (since Mustache 0.3.1 has no render() method)
Mustache.render('{{Account.Name}}',contactRecord);

How to avoid having to use global references all the time

Some of you are probably thinking, “Great, now I have to prepend this global namespace all over the place!” Fortunately, with JavaScript, that’s not necessary. By wrapping your code in closures, you can safely keep using your familiar shorthand references to external libraries without worrying about version conflicts.

For instance, say that your application uses jQuery 1.8.3, but other Visualforce Components on the page are using jQuery as old as 1.3.2! (SO ancient… 🙂) What’s a developer to do?

Well, jQuery provides the helpful jQuery.noConflict() method, which allows you to easily obtain a safe reference to a version of jQuery immediately after you load it into your page. So, as an example, say that in YOUR code you need to pull in jQuery 1.8.3:

<!-- Load jQuery 1.8.3 -->
<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script type="text/javascript">
// <![CDATA[
// Get a safe reference to jQuery 1.8.3
var jQuery_1_8_3 = $.noConflict(true);
// ]]>
</script>

Then, to your horror, another Visualforce Component that the customer you’re developing for has “got to have” in this same page (and which you don’t want to iframe…) has loaded in jQuery 1.3.2, but not bothered to namespace it!!! Therefore, both of the commonly-used jQuery globals (jQuery and $) now refer to jQuery 1.3.2!

Fortunately, FOR YOU, you’re safe! You got a protected reference to jQuery 1.8.3, and your code can carry on using $ without any issues, as long as you wrap it in a closure:

(function($){

   $('.nx-table').on('click','tr',function(){
       // Add "edit-mode" styles to this table row
       $(this).toggleClass('edit-mode');
   });

// Identify jQuery 1.8.3 as what we are referring to within this closure,
// so that we can carry on with $ as a shorthand reference to jQuery
// and be merry and happy like usual!
})(jQuery_1_8_3);

This one’s for you, coffee-shop hopping Force.com developers who go to make some changes to your code in Eclipse and — blocked! Login must use Security Token! Uggh. But I hate Security Tokens – sure, sure, it makes way more sense to use them. Definitely a best practice. But, well, sometimes I’m lazy. I confess – I very often just use Trusted IP Ranges.

So, for those of you out there that have been in my situation, and like me don’t like entering Security Tokens into your Eclipse, SOQLExplorer, Data Loader, etc., and prefer to just use Trusted IP Ranges and basic username/password, I’m guessing you find yourselves doing this over and over again:

  1. Show up at Coffee Shop you haven’t been to before – or at which you’ve never done any development on a particular Force.com project / org.
  2. Login to Eclipse.
  3. Try to save something.
  4. Fail – you’re at a non-trusted location and you’re not using the Security Token!
  5. Login to Salesforce.com.
  6. Go to your Personal Information.
  7. Copy the IP address from your most recent login.
  8. Go to Network Access, and add a Trusted IP Range, and paste in the IP Address from Step 7 into both Start and End IP Address. Click Save.
  9. Go back to Eclipse and resave.

Wish there was a way to do this faster?

There is – here’s a little convenience script for the uber-lazy, coffee-shop hopping, Security Token-hating Force.com developer. If there are any of you out there other than me, this is for you!

The Idea: a one-click link in a Home Page Component

My idea: create a Home Page Component for the Left Sidebar that, in one-click, adds a new Trusted IP Range corresponding to the current IP you’re at.

Problem 1: there’s no way to get your current IP address from just pure client-side JavaScript. Lots of server languages can give you this, but not client-side JavaScript.

Problem 2: how to quickly create a new Trusted IP Range? There’s no API access to the Trusted IP Range object (which is probably a good thing from a security perspective 🙂).

Problem 1 Solution: Use JSONP + an External Site

To make this a reality, I took the advice of StackOverflow and leveraged JSONP, which allows you to send an AJAX request to an external website and have the result returned in JSON format to a pre-designated callback JavaScript function, which immediately gets called. The basic syntax of this is sketched below.
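Here it is using jQuery’s getJSON(), mirroring the Smart IP call in the finished component further down (jQuery swaps the “callback=?” token for a generated JSONP callback name; the snippet assumes jQuery is already loaded on the page):

// Ask the external service for our public IP via JSONP;
// jQuery parses the returned JSON and hands the resulting object to our callback.
jQuery.getJSON("https://smart-ip.net/geoip-json?callback=?", function(data){
    // data.host holds the public IP address reported by the service
    console.log(data.host);
});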

Problem 2 Solution: URL Redirect with params passed in

One thing I love about the Force.com platform is that almost every action you might want to take, whether for modification of metadata or data, can be initiated by going to a specific URL. To create a new Account record, you go to “/001/e”. To view all users, you go to “/005”. And almost any field on a particular page can be pre-populated by passing a corresponding URL Parameter (although these are not always consistent). And so it is with Trusted IP Ranges – to create a new Trusted IP Range, head to “/05G/e”. And to pre-populate the Start and End IP Address, you simply have to pass in the right query string parameters: “IpStartAddress” and “IpEndAddress”.
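For example, navigating to a URL like this one (the IP address is just a placeholder) lands you on the new Trusted IP Range page with both the Start and End IP Address fields already filled in:

/05G/e?IpStartAddress=203.0.113.5&IpEndAddress=203.0.113.5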

All together now! The final product.

So here’s the final solution:

  1. Go to Setup > Home Page Components > Custom Components.
  2. Click New.
  3. Enter a name, e.g. “Add IP to Trusted Ranges”, and select “HTML Area” as the type. Click Next.
  4. For Position, select “Narrow (Left) Column”.
  5. Click the “Show HTML” checkbox on the far right, and enter the following code for the body, then click Save.

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script type="application/javascript">
  function addIPToTrustedRanges(){
    jQuery.getJSON(
      "https://smart-ip.net/geoip-json?callback=?",
      function(data){
        var ip = data.host;
        window.location.href = "/05G/e?IpStartAddress=" + ip + "&IpEndAddress=" + ip;
      }
    );
  }
</script>
<a href="javascript:addIPToTrustedRanges();">Add Trusted IP (using Smart IP)</a>

  6. Go to Home Page Layouts, and add this Home Page Component to your layout as a System Admin.

Now return to your home page, and voila! Here’s what you’ve got: a link that, when clicked, takes you to a prepopulated new Trusted IP Range – just click Save, and boom, get back to Eclipse and save your code!

[Screenshot: the “Add Trusted IP (using Smart IP)” link in the sidebar Home Page Component]

[Screenshot: the new Trusted IP Range page with Start and End IP Address prepopulated]

How does it work?

Nothing crazy here, but I’ll step through the code anyhow:

  1. Load jQuery from the Google API’s CDN, so that we can use the handy getJSON() method.
  2. Define a global function called addIPToTrustedRanges – this function performs an AJAX callout to the endpoint “https://smart-ip.net/geoip-json”, passing in the name of a callback function to execute with the JSON data that this endpoint returns. Smart IP basically returns a little string of data that is a JSON-serialized JavaScript object, whose “host” property is your current IP Address (the real one, not some NAT-translated private IP). jQuery then calls your callback function — which, in the getJSON() method, is defined as an anonymous function in the second parameter — passing the parsed data in as its argument. This is all legal, within the confines of the JavaScript same-origin policy, because of JSONP.
  3. In our anonymous callback function, we do a URL redirect, passing in our current IP address as both the Start and End IP Address as parameters to the new Trusted IP Address page:
    "/05G/e?IpStartAddress="+ip+"&IpEndAddress="+ip;
  4. Then we click Save, and we’re done!

Quick check to make sure this post is worth 5 minutes of your precious work day:

If you have ever wanted:

(1) To run long chains of Batch Apex jobs in sequence without using up scheduled Apex jobs
(2) To run batch/scheduled Apex as often as every 5 minutes every day
(3) To manage, mass-edit, mass abort, and mass-activate all your scheduled and aggregable/chained Apex jobs in one place, in DATA
(4) To avoid the pain of unscheduling and rescheduling Apex jobs when deploying

Then keep reading 🙂

A recent (well, maybe not so recent — is it June already???) blog post by Matt Lacey really struck a chord with me. Matt highlights one of the biggest developer woes related to using Scheduled Apex — whenever you want to deploy code that is in some way referenced by an Apex Class (or classes) that is/are scheduled, you have to unschedule all of these classes. With Spring 12, Salesforce upped the limit on the number of Scheduled Apex Classes from 10 to 25 (ha–lellujah! ha–lellujah!). However, with 15 more scheduled jobs to work with, this have-to-unschedule-before-deploying problem becomes even more of a pain.

But this isn’t the only woe developers have related to asynchronous Apex. An issue that has shown up a lot lately on the Force.com  Developer Boards and LinkedIn Groups is that of running Batch Apex jobs in sequence — run one batch job, then run another immediately afterwards. Cory Cowgill published an excellent solution for accomplishing this, and this has been the go-to method for linking Batch Apex jobs ever since. But one of the problems with his method is that it leaves a trail of scheduled jobs lying in its wake — seriously cluttering your CronTrigger table. If you want to run these batch process sequences often — e.g. kicking them off from a trigger, or running them every day (or even every hour!), you have to start “managing” your scheduled jobs more effectively.

A third issue often cited by developers is the frequency at which jobs can be run. Through the UI, a single CronTrigger can only be scheduled to run once a day. Through code, you can get down to once an hour. If you wanted to, say, run a process once every 15 minutes, you’d have to schedule the same class 4 times — using up 4/25 (16%) of your allotted scheduled jobs — and you have to manage this through Apex, not the UI.
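To make that concrete, here’s roughly what the every-15-minutes workaround looks like from anonymous Apex (MyJob is just a placeholder for any Schedulable class):

// Schedule the same Schedulable class four times, once per quarter hour,
// burning four of the org's 25 scheduled-job slots.
System.schedule('MyJob :00', '0 0 * * * ?', new MyJob());
System.schedule('MyJob :15', '0 15 * * * ?', new MyJob());
System.schedule('MyJob :30', '0 30 * * * ?', new MyJob());
System.schedule('MyJob :45', '0 45 * * * ?', new MyJob());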

As I mulled over these issues, I thought, there’s got to be a better way.

There is.

Enter Relax

Relax is a lightweight app I’m about to release, but before I do, I’m throwing it out for y’all to try as a public beta. (The install link is at the end of the post, but restrain yourself, we’ll get there 🙂 )

Broadly speaking, Relax seeks to address all of these challenges, and a whole lot more.

At a high level, here are a few of the things it lets you do:

  1. Manage all your individually-scheduled Apex jobs in data records (through a Job__c object). Jobs can be mass-scheduled and mass-aborted, or mass-exported between orgs and then relaunched with clicks, not code.
  2. Create and schedule ordered Batch Apex “chains” of virtually unlimited length, with each “link” in the chain kicking off the next link as soon as it finishes. And all of your chained processes are handled by, on average, ONE Scheduled Job.
  3. Schedule jobs to be run as often as EVERY 1 MINUTE. ONE. UNO.
  4. Run ad-hoc “one-off” scheduled jobs without cutting into your 25 allotted scheduled jobs.

Let’s jump in.
 
Individual vs. Aggregable
In Relax, there are 2 types of jobs: individual and aggregable. You create a separate Job__c record corresponding to each of your Individual jobs, and all of the detail about each job is stored in its record. You can then separately or mass activate / deactivate your jobs simply by flipping the Active? checkbox. Each Individual job is separately scheduled — meaning there’s a one-to-one mapping between a CronTrigger record and an Individual Job__c record. Here’s what it looks like to create an Individual Job. You simply specify the CRON schedule defining when the job should run, and choose a class to run from a drop-down of Schedulable Apex Classes.
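An Individual Job’s class just needs to implement Schedulable. Here’s a minimal sketch of a “Case Escalator” style class (the escalation logic is invented purely for illustration):

// Any Schedulable class can be selected as an Individual Job.
global class CaseEscalator implements Schedulable {
    global void execute(SchedulableContext ctx) {
        // Illustrative logic: escalate open Cases that are more than 2 days old
        Datetime cutoff = Datetime.now().addDays(-2);
        List<Case> stale = [
            select Status from Case
            where IsClosed = false and CreatedDate < :cutoff
            limit 200
        ];
        for (Case c : stale) {
            c.Status = 'Escalated';
        }
        update stale;
    }
}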

Aggregable jobs, on the other hand, are all run as needed by the Relax Job Scheduler, at arbitrary increments. For instance, you could have an aggregable job that runs once every 5 minutes, checking for new Cases created by users of your public Force.com Site (whose Site Guest User Profile does not have access to Chatter) and inserting Chatter Posts on the appropriate records. You could have a job that swaps the SLA of Gold/Bronze SLA Accounts once every minute (contrived, yes, but OH so useful 🙂). Or you could have a series of 5 complex Batch Apex de-duplication routines that need to be run one after the other: set them all up as separate aggregable jobs, assign orders to them, and have the entire series run once every 15 minutes, every day. Here’s what the SLA swapper example looks like:

How do I use it?
 What do you have to do for your code to fit into the Relax framework? It’s extremely simple. For Individual Jobs, your Apex Class just needs to be Schedulable. For Aggregable Jobs, there are several options, depending on what kind of code you’d like to run. For most devs, this will be Batch Apex, so the most useful option at your disposal is to take any existing Batch Apex class you have and have it extend the “BatchableProcessStep” class that comes with Relax:

// relax's BatchableProcessStep implements Database.Batchable<sObject>,
// so all you have to do is override the start,execute,and finish methods
global class SwapSLAs extends relax.BatchableProcessStep {

	// Swaps the SLA's of our Gold and Bronze accounts

        // Note the override
	global override Database.Querylocator start(Database.BatchableContext btx) {
		return Database.getQueryLocator([
                     select SLA__c from Account where SLA__c in ('Gold','Bronze')
                ]);
	}

	global override void execute(Database.BatchableContext btx, List<SObject> scope) {
		List<Account> accs = (List<Account>) scope;
		for (Account a : accs) {
			if (a.SLA__c == 'Gold') a.SLA__c = 'Bronze';
			else if (a.SLA__c == 'Bronze') a.SLA__c = 'Gold';
		}
		update accs;
	}

        // The ProcessStep interface includes a complete() method
        // which you should call at the end of your finish() method
        // to allow relax to continue chains of Aggregable Jobs
	global override void finish(Database.BatchableContext btx) {
		// Complete this ProcessStep
		complete();
	}

}

That’s it! As long as you call the complete() method at the end of your finish() method, relax will be able to keep infinitely-long chains of Batch Jobs going. Plus, this framework is merely an extension to Database.batchable — meaning you can still call Database.executeBatch() on your aggregable Batch Apex and use it outside of the context of Relax.
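For example, a quick sketch of launching the batch above directly, with no Relax Process involved (the scope size of 200 is arbitrary):

// Kick off the SwapSLAs batch on its own, outside of Relax
Id jobId = Database.executeBatch(new SwapSLAs(), 200);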

Relax in action

In our org, we have 4 jobs set up: 1 individual, 3 aggregable. To kick them off, we select all of them, and change their Active? field to true. Here’s what it looks like after we’ve mass-activated them:

And here’s what the Scheduled Jobs table (accessible through the Setup menu) looks like. Since Case Escalator was set to be run individually, it has its own job. Then there’s a single “Relax Job Scheduler”, which runs every 5 minutes (I’ll probably drop this down to every 1 minute once I officially release the app), checking to see if there are any aggregable Jobs that need to be run, and running them:

Every time the Relax Job Scheduler runs, the very first thing it does is schedule itself to run again — regardless of what happens during any processing that it initiates. It then queries the “Next Run” field on each Active, Aggregable Job__c record that’s not already being run, and if Next Run is earlier than now, it queues the job up to be run as part of a Relax “Process”. Each Process can have an arbitrary number of ProcessSteps, which are executed sequentially until none remain. If both the current and next ProcessSteps are BatchableProcessSteps, Relax uses a “Process Balloon” to keep the Process “afloat” — essentially launching a temporary scheduled job that is thrown away as soon as the next ProcessStep begins.

One-off Jobs

Another powerful feature of Relax is the ability to effortlessly launch one-off, one-time scheduled jobs, without having to worry about cluttering the CronTrigger table with another scheduled job. It takes just 1 line of code, AND you can specify the name of the class to run dynamically — e.g. as a String! Reflection ROCKS!!!

// Schedule your job to be run ASAP,
// but maintain the Job__c record so that we can review it later
relax.JobScheduler.CreateOneTimeJob('myNamespace.AccountCleansing');

// Schedule your job to be run 3 minutes from now,
// and delete the Job__c record after it's run
relax.JobScheduler.CreateOneTimeJob(
    'SwapSLAs',
    Datetime.now().addMinutes(3),
    true
);

Try it out for yourself!

Does this sound awesome to you? Give it a test run! Install the app right now into any org you’d like! Version 1.1 (managed-released)

Please send me any feedback you may have! What would make this more usable for you? I’ll probably be releasing the app within the next month or so, so stay tuned!

When I hear the words “Reports” and “Managed Packages” in the same sentence, I involuntarily let out a grunt of displeasure. Ask any seasoned ISV, and I guarantee you that the same sour taste fills their mouth. Why? Well, here’s the classic problem: an ISV includes some Reports in their managed package. Now, a common trick for making Reports “dynamic” is to leave one of the Filter Criteria blank and have its value passed in through query string parameters using the “pv<n>” syntax, where n is the (0-indexed) position of the filter criterion you’d like to populate. For example, in this report of Enrollments at a given School, filter criterion 2 is left BLANK:

Then, if we load up this page with query string parameter “pv2” set to the name of a School, like so:
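The URL looks something like this (the Report Id shown is just a placeholder; yours will differ):

/00O800000050abc?pv2=Lincoln+High+School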

the value that we passed in will be dynamically inserted into the 2nd Filter Criterion, and we’ll have a report on Enrollments at Lincoln High School:

This is awesome, right? Quick, let’s throw a Custom Link on the Account Detail Page called “Enrollments” that links to this report, but passing in the Id of the Report! Yeah, yeah, yeah! I love this!

Hold your horses, partner.

This is where the ISVs hang their heads in sadness… sorry son, it just ain’t that easy.

What’s the matter, grandpa? Come on, this is child’s play!

Not quite.

Notice where we said that we’d be passing in the ID of the Report. Hard-coded. For an ISV, IDs are the ultimate taboo. Why? Well, sure, you can package up the Report and Custom Link. But as soon as you install the package into a customer’s org, the Id of the Report has CHANGED — and the link is BROKEN. It’s a situation that the typical one-org Admin will never have to face, but, well, welcome to the world of the ISV.

Isn’t there one of those handy Global Variables which lets you grab the Name or DeveloperName of a Report?

Nope, sorry partner.

So, what DO you do?

Well, you write a ‘ViewReport’ Visualforce page that takes in the API name of a Report — which does NOT change across all the orgs that a package is installed into — and uses this API name to find the ID of the Report and send you to it. What does this look like?

The Visualforce is simple — one line, to be exact:


<apex:page controller="ViewReportController" action="{!redirect}"/>

The Apex Controller is a little more interesting. Here’s the meat, with test code included that achieves 100% coverage (so you can start using it right away!!!):


public class ViewReportController {

    // Controller for ViewReport.page,
    // which redirects the user to a Salesforce Report
    // whose name is passed in as a Query String parameter

    // We expect to be handed 1-2 parameters:
    // dn: the DeveloperName of the Report you want to view
    // ns: a Salesforce namespace prefix (optional)
    public PageReference redirect() {
        // Get all page parameters
        Map<String,String> params = ApexPages.currentPage().getParameters();

        String ns = params.get('ns'); // NamespacePrefix
        String dn = params.get('dn'); // DeveloperName

        List<Report> reports;

        // If a Namespace is provided,
        // then find the report with the specified DeveloperName
        // in the provided Namespace
        // (otherwise, we might find a report in the wrong namespace)
        if (ns != null) {
            reports = [select Id from Report
                  where NamespacePrefix = :ns
                  and DeveloperName = :dn limit 1];
        } else {
            reports = [select Id from Report where DeveloperName = :dn limit 1];
        }

        PageReference pgRef;

        // If we found a Report, go view it
        if (reports != null && !reports.isEmpty()) {
            pgRef = new PageReference('/' + reports[0].Id);
            // Add back in all of the parameters we were passed in,
            // MINUS the ones we already used: ns, dn
            params.remove('ns');
            params.remove('dn');
            pgRef.getParameters().putAll(params);
        } else {
            // We couldn't find the Report,
            // so send the User to the Reports tab
            pgRef = new PageReference('/'
                + Report.SObjectType.getDescribe().getKeyPrefix()
                + '/o'
            );
        }

        // Navigate to the page we've decided on
        pgRef.setRedirect(true);
        return pgRef;

    }

    ////////////////////
    // UNIT TESTS
    ////////////////////

    // We MUST be able to see real Reports for this to work,
    // because we can't insert test Reports.
    // Therefore, in Spring 12, we must use the SeeAllData annotation
    @isTest(SeeAllData=true)
    private static void Test_Controller() {
        // For this example, we assume that there is
        // at least one Report in our org WITH a namespace

        // Get a report to work with
        List<Report> reports = [
            select Id, DeveloperName, NamespacePrefix
            from Report
            where NamespacePrefix != null
            limit 1
        ];

        // Assuming that we have reports...
        if (!reports.isEmpty()) {
            // Get the first one in our list
            Report r = reports[0];

            //
            // CASE 1: Passing in namespace, DeveloperName,
            // and parameter values
            //

            // Load up our Visualforce Page
            PageReference p = System.Page.ViewReport;
            p.getParameters().put('ns',r.NamespacePrefix);
            p.getParameters().put('dn',r.DeveloperName);
            p.getParameters().put('pv0','llamas');
            p.getParameters().put('pv2','alpacas');
            Test.setCurrentPage(p);

            // Load up our Controller
            ViewReportController ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            PageReference ret = ctl.redirect();

            // We should be sent to the View page for our Report
            System.assert(ret.getURL().contains('/'+r.Id));
            // Also, make sure that our Filter Criterion values
            // got passed along
            System.assert(ret.getURL().contains('pv0=llamas'));
            System.assert(ret.getURL().contains('pv2=alpacas'));

            //
            // CASE 2: Passing in just the DeveloperName
            //

            // Load up our Visualforce Page
            p = System.Page.ViewReport;
            p.getParameters().put('dn',r.DeveloperName);
            Test.setCurrentPage(p);

            // Load up our Controller
            ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            ret = ctl.redirect();

            // We should be sent to the View page for our Report
            System.assert(ret.getURL().contains('/'+r.Id));

            //
            // CASE 3: Passing in a nonexistent Report name
            //

            // Load up our Visualforce Page
            p = System.Page.ViewReport;
            p.getParameters().put('dn','BlahBLahBlahBlahBlahBlah');
            Test.setCurrentPage(p);

            // Load up our Controller
            ctl = new ViewReportController();

            // Manually call the redirect() action,
            // and store the page that we are returned
            ret = ctl.redirect();

            // We should be sent to the Reports tab
            System.assert(ret.getURL().contains(
                '/'+Report.SObjectType.getDescribe().getKeyPrefix()+'/o'
            ));

        }

    }

}

And here’s an example of using this code in a Custom Link:
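It’s simply a Custom Link of type URL, along these lines (the “acme” namespace and report DeveloperName are placeholders; in a namespaced package the Visualforce page itself would also carry the namespace prefix):

/apex/ViewReport?ns=acme&dn=School_Enrollments&pv2={!Account.Name}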

Voila! An ISV-caliber way to link to any Report in any Managed Package, and it won’t break in an org where the package is installed!

The basic flow of the Apex is pretty simple: the redirect method gets called immediately upon page load, and it returns a page reference to redirect the user to. So all that Apex needs to do for us is find the Report with the provided API name / DeveloperName (and optionally in a specific namespace), and send us to /<Id>, where <Id> is the Id of that Report. Pretty straightforward. Just a few interesting points:

  1. We ‘tack-on’ to the resultant page reference any OTHER page parameters that the user passed along, so we can pass in any number of dynamic Filter Criteria parameters using the pv<n> syntax.
  2. You may be wondering — wait, you can QUERY on the Report object? Yep! Reports are technically SObjects, so you can query for them, but they fall under the mysterious category called “Setup” objects which, among other peculiar quirks (Google “MIXED_DML_OPERATION” for one of the more annoying ones), only selectively obey CRUD and only expose some of their fields to SOQL. Fortunately, for our purposes, the Id, Name, DeveloperName, and NamespacePrefix fields are all included in this short list. Actually, fire up SOQLXplorer — you might be inspired by some of the other fields that are exposed.
  3. Namespacing of Reports — Reports included in Managed Packages don’t have to have a globally unique name — they only have to be unique within their Namespace. Therefore, when querying for Reports, it’s best to query for a report within a particular namespace.
  4. If the Report is not found — in our example, we send the User to the Reports tab. You might want to do something different.