Deep Thought is Essential for Productive and Satisfying Deep Work

Over the past two years, the world has felt like a rat race: meetings starting at 05:00 – 06:00 at least three times a week, near-endless emergencies and fires to put out, all while stressing over the piles and piles of work on top.

Needless to say, it has been extremely difficult to feel productive in 2017 & 2018. A lot of important work has been completed… and yet it hasn’t *felt* productive – and it surely hasn’t been satisfying. Why? There are several reasons, but one of the kickers is that I scratched away my time for deep thought… and, as a result, deep work too.

With my recent departure from my old company, Socrates AI Inc., I found something I had lost a couple of years ago – the time and ability for deep thought. Deep thought is satisfying, and it’s essential for productivity (mine, anyway). There are a lot of reasons why this happened, none of which I’ll share here ;-) but the fact is that deep thought is essential for my own productivity and happiness. It cannot be forgone for long stretches of time without losing something else along the way.

What is Deep Thought?

Deep thought is taking time out of your day to focus exclusively on important thoughts and problems – past, present, and future. It’s shutting down distractions like social media, email, and notifications for a chunk of time to reflect on what you’re working on, where you want to go, and how you want to get there. I did a lot of deep thinking a few years ago, and my days then felt very productive – and also very satisfying.

A lot of Software Developers, Engineers, Artists and Creatives NEED deep thought. It’s essential. From what I’ve seen, a lot of managers and executives forget that their team also needs this deep thought to achieve deep work, to feel productive, and also to feel satisfied about their work. It’s important to give your team space to work, but it’s equally important that you give YOURSELF the space you need to think and work. You are your own manager. You are your own executive. Don’t be “that guy” or “that girl” to yourself… give yourself the time for deep thought and deep work that you and your mind crave.

On the Metaverse…

I’m definitely aboard the VR Bandwagon – and the possibilities are pretty huge… but it all requires saturation, and that’s going to take a bit of time.

The long bit:

When VR was first starting its mainstream debut, I thought it was nothing but a fad. I dismissed it as such, saying the graphics weren’t there, the hardware wasn’t ready, there were no actual use-cases, etc.
But a few years ago I got the chance to try out a headset, and as soon as I did, it opened my eyes – and my mind – to the possibilities of the future.

When I first tried on that headset, I knew immediately this could shake up the theatre industry.
A 60-foot screen in every household? Movies on-demand without any advertisements? No rude tween moviegoers?? Plus unlimited popcorn from your own personal kitchen just a few feet away…?! If nothing else, I wanted one at least for that.

Aside from reducing the hardware requirements and improving comfort, the next step will be to make the experience more social. Facebook/Oculus has already started down that stretch (along with some smaller startups who developed products beforehand). It’s been a dream for my brother and me to be able to watch movies together despite being thousands of kilometres apart, or in different countries altogether. Virtual reality can actually make that happen. I enjoy watching movies with family and friends, and I’d love to be able to do that remotely.

Over time, development tools will be added and that’s what will truly start the dot-com boom of the VR World… but more importantly, it plants the seed for a new universe to take hold, one that is Virtual in nature. One where anyone is able to create their own basic space with customizations, and connect it to the new Virtual Universe to share with others. As people improve their development and design trade skills, a natural market will develop for services, hosting space, infrastructure and technical plumbing.
I think when this happens, we’ll have a new information highway… a new form of data exchange built on top of the Internet, and a new parallel universe of our own creation that mimics our own.

Why? Because it opens up so many possibilities for sharing information, expressing ourselves, and finding happiness.

Within my lifetime, I think people will be able to work from within VR… I know I want to work from within a VR-Verse…
Think of coding, or writing a book, beside a calm stream in a beautiful forest, or on the side of a mountain overlooking a fjord. Your office becomes anywhere you want it to be…
Content creators will be in high demand – photographers to capture the scenes people want, UI/UX designers to build new interfaces for this VR universe… even industries that are suffering today may find hope in a VR-Verse. Who will report the news? VR journalists? It’s a quirky thought, but one that may just be within the grasp of virtual reality…

The social impacts, of course, are great as well.
Will most of the people in the early VR-Verse be AIs / Virtual Robots?
Just assistants to help guide humans along their new paths?

Will people become more reclusive in the “real” world and depend more on the VR World?
Or will it open people up to understand that humans elsewhere are still humans?
Questions that will be answered in due time, I’m sure… but I do see hope in this area, especially around early therapeutic uses. Group therapy sessions could be extremely helpful in VR, and scenarios could be developed to help people overcome challenges.

Most of the VR scenarios rely on a very important thing: Saturation.
It requires a lot of content, it requires people, and it will probably need to seamlessly integrate spaces or worlds together.
It requires flexibility and social connections.
A VR-Verse is only as big as the community around it.
The social tech will be the glue that holds that universe together – without market saturation, or at least a large number of people building or connected on the platform, it will fall apart.
Baby steps are important here… The universe wasn’t built in a day…

From the ashes…

From the ashes comes a new blog!

It’s been a really long time since I shut my old blog down… With the New Year right around the corner and imminent changes with my career, I figured now would be a good time to bring up a new blog to share insights into research, services and configurations. Learning can be fun! And can be shared.

A lot has changed over the past 10 years, and we’re going to see a lot more within the next 10.
Let’s take this journey together, develop new systems, destroy some old ones and improve our technology :-)

Welcome to From the Ashes: Tech Bl㊋g

Bouncing email in OSX Mountain Lion (10.8)

In Snow Leopard, the Mail app had a seldom-used, but very useful (to me), feature called “Bounce”.

Basically, when you bounce an email, it sends an automated message to the sender stating that an error occurred and their message was not delivered. This was often useful for sending spam back to the sender, informing them that your email address “isn’t valid”.

This feature was removed in Lion, but it can be easily recreated by using Apple Script. The original source tested it in Lion & Mountain Lion. I’ve confirmed it in Mountain Lion (10.8.2 as of this writing), but haven’t tested in 10.7.

Steps to recreate “Bounce”:

1. Open the Automator app
2. Make a new Service
3. In the main window (right column), set “Service receives” to “no input” in “Mail”
4. Drag “Get Selected Mail Items” into the workflow from the pane on the left
5. Drag “Run AppleScript” into the workflow
6. Insert the following code:

on run {input, parameters}
   tell application "Mail"
      repeat with eachMessage in input
         bounce eachMessage
      end repeat
   end tell
end run

7. Save as “Bounce” (or whatever you’d like to call it)

The service is now created, so to test it:

1. Open Mail
2. Select a message (probably a good idea to send yourself a message so you can verify the response!)
3. Click the “Mail” menu up top –> “Services” –> “Bounce”. A bounce notification will be sent to the sender.

Original Source

Migrate AWS EBS Linux Instance from Region A to Region B

Recently I’ve had to migrate one of my AWS instances to another region, and unfortunately Amazon doesn’t provide an easy way to do this. I found a few ways to do it, but some of them didn’t work (notably, I had issues using rsync across regions. Data transferred ok, but I couldn’t connect to the new instance).

The method that did work for me came from another guide; I didn’t have any issues with the process, so it’s documented here for future reference.

The best thing about doing it this way is that you don’t need to mount the volume from the operating system on either the source or destination server, and there is no file system creation required on the destination volume.

Here are the simplified steps to do it, but if you’d like more info please visit the original source – linked above.

Set up the source instance:

1. Start up a micro instance in the source region
2. Create a snapshot of the instance you want to transfer
3. Create a volume from the snapshot
4. Attach the new volume to /dev/sdf
Note: If you’re using an Ubuntu version > 10, the device may show up under a different name (e.g. /dev/xvdf)… beware of that.

Set up the destination instance:

5. Create a micro instance in the destination region, use the same key pair used with the source instance (AWS allows importing an existing keypair)
6. Create an empty volume and attach it to /dev/sdf
Note: If you’re using an Ubuntu version > 10, the device may show up under a different name (e.g. /dev/xvdf)

Add the necessary SSH key to the source instance

7. SSH into the source instance, add the private key to the instance

vim default_keypair.pem;
...Paste your private key and save it
chmod 600 default_keypair.pem;

8. Test SSH from the source to the destination (this will add the destination’s IP to the known_hosts file)

ssh -i default_keypair.pem ubuntu@ec2-dest-ip-address;

Transfer the data from source to destination

9. Do the actual data transfer; this could take upwards of an hour to complete.

sudo su;
dd if=/dev/sdf | gzip -c -1 | ssh -i default_keypair.pem ubuntu@ec2-dest-ip-address "gunzip -c -1 | sudo dd of=/dev/sdf";

After the transfer is complete, something like the following will be output to your terminal:

16777216+0 records in
16777216+0 records out
8589934592 bytes (8.6 GB) copied, 2861.36 s, 3.0 MB/s
16777216+0 records in
16777216+0 records out
8589934592 bytes (8.6 GB) copied, 2937.77 s, 2.9 MB/s
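If you’d like to sanity-check the pipeline itself before touching real volumes, it can be tried locally with an ordinary file standing in for /dev/sdf. This is just a minimal sketch – the ssh hop between instances is omitted and the file names are made up:

```shell
# Local sketch of the dd | gzip | gunzip | dd pipeline from step 9.
# An ordinary file stands in for /dev/sdf so this is safe to run anywhere;
# the "ssh -i ... | sudo dd" hop is replaced by a direct pipe.
printf 'hello volume data' > source.img
dd if=source.img 2>/dev/null | gzip -c -1 | gunzip -c -1 | dd of=dest.img 2>/dev/null

# If the two files are identical, the pipeline preserved every byte:
cmp -s source.img dest.img && echo "volumes match"
```

On the real instances, the only difference is that the gunzip half runs on the destination via ssh, exactly as in step 9.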

Set up the new instance

10. Create a snapshot of the volume you just populated with data
11. Create an AMI from the snapshot
– Make sure the dialog uses the same architecture as the original AMI, and give it a name.
– You should be able to use the default options for the rest.
NOTE: If the AMI you used in the source region isn’t available in the destination region, start up a new AMI as close as possible to the source and copy the Ramdisk ID and Kernel ID from the running instance

Finish up / tidy up

12. Finally, start up a new instance from your custom AMI image and test that it works properly
13. Once you’ve verified the transfer went well, terminate your receiver and source instances if you don’t need them anymore.

Kill a Linux process by name

The kill command is used to kill a process using its pid, but there are times when you need to kill the process by its name.

killall <command> is usually used to do this, but sometimes it returns “No matching processes belonging to you were found”. killall isn’t the greatest at finding the correct process name (especially when your process is being run by an interpreter).

NOTE: Keep in mind that “ps ax” and “ps aux” return two different command names, a long form and short form respectively.

I ran into this issue, so I wrote the following to find the process I needed to kill much more easily.

kill -15 `ps ax | grep "[0-9] <command>" | awk -F" " '{print $1}'`

NOTE: Replace <command> with the command you’d like to kill.

This gets the specific pid for your chosen command and passes it to the kill application.

Specifically, I needed to kill a Node.js script that was being run by the node interpreter.

This had to be done in a crontab every X minutes. killall wasn’t finding the process on Ubuntu but seemed to work on OSX, so I wrote this method, which worked on both OSs.

kill -15 `ps ax | grep "[0-9] node bin/server.js" | awk -F" " '{print $1}'`
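To see why the `[0-9]` prefix in the grep pattern matters, here’s a minimal sketch using a canned `ps ax`-style string (the PIDs and paths are made up for illustration). The pattern requires a digit and a space right before the command – the tail end of the TIME column – which grep’s own process line doesn’t have, so grep never matches itself:

```shell
# Two fake `ps ax` lines: the real node process, and the grep process itself.
ps_output='12345 pts/0    S      0:01 node bin/server.js
12399 pts/0    S+     0:00 grep [0-9] node bin/server.js'

# Only the real process line has a digit + space before the command
# ("0:01 node ..."); the grep line has "] node ...", so it is excluded:
pid=$(echo "$ps_output" | grep "[0-9] node bin/server.js" | awk '{print $1}')
echo "$pid"   # 12345 - only the real process matched
```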

Transparent Facebook XFBML “Like” button


You want to add a Facebook “Like” button to your website via XFBML, but you use a non-white background – maybe a particular colour, or an image that repeats itself.
By default, the “Like” button is rendered in an iframe with a white background.


Use CSS to make the iframe’s background transparent.


XFBML “Like” button.
NOTE: I enclosed the XFBML in my own div that I will apply my custom style to.

<div class="fb-like">
    <fb:like href="" send="true" layout="button_count" width="90" show_faces="false"></fb:like>
</div>

CSS to set the transparency of the iframe:

.fb-like iframe {
    background: transparent;
}

Bam! Sexy-looking Facebook XFBML Like button that doesn’t break my site’s layout.

Check it out in action at

Before (note the ugly white background)

After (transparent background)


Virtualizing your workspace

Over the years I’ve gone through many OS updates, software upgrades, hard drive reformats, hard drive crashes… you name it. Each time, even with backups, it’s always a pain to have to reinstall all the server software, components, dependencies, etc.

Enter virtual machines.

For those of you unfamiliar, a virtual machine is exactly what the name implies: virtualized hardware (machines) that allows you to run multiple operating systems at the same time. For more information, see

Virtual machines aren’t new at all, and I have used them for many years for various tasks (testing new software, testing/tracing viruses without infecting my host computer, network testing, and gaming, to name a few), but previously I hadn’t used them much for my web development work (aside from browser testing). I’ve tried both Parallels and VMware Fusion and don’t have a preference at the moment; currently I use VMware Fusion.

My setup:

  • Host OS: OSX
  • Guest OS (VM): Ubuntu 64bit
  • Web server: Lighttpd, PHP w/ FastCGI and MySQL

I set this up a while ago and so far it has worked great.

The benefits are many:

  • Anytime I perform a host OS upgrade, I just need to copy my virtual machine to my local disk and my development server is ready for work.
  • I can copy my virtual machine to any number of computers and run the same environment regardless of whether I’m using my desktop or my laptop.
  • Virtual machines can be used regardless of the host operating system without affecting my server applications or impacting my work
  • Since I keep regular “snapshots” of my virtual machine, I can easily roll the virtual machine back to a previous state if there are issues with upgrading the guest OS… all without impacting my host OS.
  • Combining a virtual server with Git makes sure any project I need can be easily retrieved and updated, regardless of where I am.

In the end, the most valuable thing it gives me is a ton of time saved and as much redundancy as I want.

Cybersource SOP Response Handlers from Multiple Domains

I noticed quite a few people were having issues with this (including myself), and it wasn’t clearly mentioned in the documentation. If it helps you, please leave a comment!

1. You use Cybersource as a payment processor.
2. You’re using Cybersource’s SOP option.
3. You have multiple domains that use Cybersource, but you can only specify a single URL as your response handler (a page that handles the decline/receipt response). You need to be able to send people to the appropriate domain, based on where they came from (that is, if you’re coming from Domain A you should be sent back to Domain A).

Answer: The following input fields allow you to override the response URL configured in the Test Business Center. This allows you to add your own custom response page, based on the location you’re coming from.

<input name="orderPage_receiptResponseURL" type="hidden" value="" />
<input name="orderPage_declineResponseURL" type="hidden" value="" />
<input name="orderPage_sendMerchantURLPost" type="hidden" value="true" />
<input name="orderPage_merchantURLPostAddress" type="hidden" value="" />

A little more in-depth, for the curious

What is Cybersource?

Cybersource, [is] a leading provider of Credit Card Processing for Business, Electronic Payment & Risk Management Solutions also provides solutions to enable electronic payment; avoid online credit card fraud and credit card processing for Web, Call center & POS environments.

What is SOP?

SOP (Silent Order Post) is a feature that Cybersource provides that allows you to provide an online credit card form and submit the information to Cybersource for processing.

Let me give you a tiny bit of background on how the flow works (not going into any specifics):

1. User inputs information in your form
2. User submits the form… the form is sent directly to Cybersource
3. Cybersource processes the information, and sends the user back to a “response handler” (URL) the developer specified in the business center
4. The response handler displays an error message or receipt based on the response from Cybersource

Most people seem to think the response handler can only be configured in the Business Center – I thought this as well – but after scouring their documentation and trying various methods, I found that these fields can be customized within your form data.

For more information, feel free to check the following links.

Information about SOP

SOP User Guide (HTML format)

How to install PEAR, PHPUnit, the Testing_Selenium PEAR package, and Selenium RC in Windows Vista

What is PHPUnit?

  • PHPUnit is a testing framework to create and run automated unit tests for your web application.
  • PHPUnit helps test the back-end of your web application by running through certain parts (units) and testing them, comparing the results with the expected output

What is Selenium?

  • Selenium is a suite of test tools to help test the front-end of your web application – the front-end being the visual aspect and browser compatibility.
  • Using Selenium you can create automated tests to ensure your product is less prone to errors

While trying to install Selenium, I never found a single resource with a simple install guide covering everything I needed. So I hope this can help someone who needs to install everything – and if it doesn’t, well, at least it will help me if I need to install it all on another computer ;)

This post assumes PHP is installed (I used 5.2.5) in C:\php\ (change the directory where applicable).


The first thing you’ll need to do is install PEAR. To do this, open up your command prompt:
1. Click the Start button
2. Type “cmd” (without quotations) in the search field
3. Click the “cmd.exe” application, or hit ENTER if that’s the only one there

In the command prompt, go to your PHP root directory. Mine was located at C:\php\
cd C:\php\

Run the go-pear.bat file with the command prompt. This batch file will install PEAR in C:\php\PEAR\

Next, you’ll want to install PHPUnit. To do this, type the following in the command prompt:
pear channel-discover
pear install phpunit/PHPUnit

The first command registers the channel with the local PEAR environment – basically it lets PEAR know where to get PHPUnit from. The second command downloads and installs PHPUnit.

After it installs, you’ll be able to find it in C:\php\PEAR\PHPUnit\

Next, you’ll want to download the Selenium package using pear. As of this writing, the newest version is 0.4.3. In the command prompt, type:
pear install Testing_Selenium-beta

If you don’t have Java installed, you should install it now. To check, type “java -version” in the command prompt (you need JRE 1.5 or higher).

After that, you can install Selenium RC. I downloaded the most recent version at this time, Version 1.0 beta 1:
Create a new folder called “selenium” somewhere easily accessible from the command prompt.
Inside the folder you downloaded, there’s a Selenium server folder (mine was called “selenium-server-1.0-beta-1”). Move all the files from that folder into the “selenium” folder you created above.

In the Command prompt, go to the directory that contains the Selenium server.

To start the server in interactive mode (allows you to control the browser manually), type the following in the command prompt:
java -jar ./selenium/selenium-server.jar -interactive

And finally everything is set up to create and run tests!

To create tests, you can use the Selenium IDE plugin for Firefox to record actions. Alternatively, you can code them yourself.

If you use the IDE: after you create a new test, export it as PHP to an easy-to-access location.
In the command prompt, change your directory to the location of your new test.
Use the following to run the test with PHPUnit:
phpunit testname.php

Assuming you’ve done everything correctly, PHPUnit will read the test file and tell your running Selenium RC server to open a browser. From there, it will go through your test and give you a report!

Note: This post was originally published on my old blog on August 28th, 2008 and was re-posted here for safe-keeping.