Book Review: The Essence of Camphor by Naiyer Masud

The Essence of Camphor is a collection of short stories by Naiyer Masud, considered one of the foremost Urdu short-story writers in India. This collection contains English translations of ten of his stories.

This is not the sort of book I would have picked up on my own. It was Pratul who urged me to read it, comparing Masud’s style to that of Haruki Murakami. While Masud and Murakami write about completely different people in completely different cultural contexts, I feel Pratul’s comparison is not entirely inapt. There are many parallels between the works of the two writers, which is not surprising considering both of them are strongly influenced by Kafka.

Masud’s stories are surreal and dreamlike, and as one would expect, they don’t conform to traditional narrative structures. Many of them are told in a stream of consciousness style by narrators who appear to be reminiscing about their early years. Themes of childhood and family life in old India are prominent throughout. Someone on Goodreads called them “mood stories”—stories with the sole purpose of evoking a sense of time and place, or I suppose the lack thereof. In other words, Kafkaesque. Some of them are vaguely terrifying (Obscure Domains of Fear and Desire, The Woman in Black), others are sorrowful (The Essence of Camphor, Nosh Daru), all of them are beautiful. Masud captures the sights, sounds and smells of old Lucknow in such vivid detail that you almost start reminiscing about the good old days yourself.

Beautiful as they are, these stories are also inscrutable. I rarely read the introduction to a book before reading the book itself, but I’m glad I broke my rule this time. Muhammad Umar Memon—who wrote the introduction and translated some of the stories—has this to say about Masud’s work:

[…] reading Masud’s stories evoked the sensation of being thrown headlong into a self-referential circularity.

Which I interpret to mean: don’t think too hard, kids, just enjoy the ride. And what a ride it is.

To be honest, I often felt frustrated with this collection. I would not recommend reading it from beginning to end in one sitting. This is a book best consumed slowly, over a span of many weeks. Despite my frustration, some of these stories have left a deep impression on my mind.

If you enjoy Murakami and/or Kafka, I’d highly recommend picking up a copy of The Essence of Camphor. If you’re not into surrealism, keep away.

My Reading List for 2014

The hardest thing to do after finishing a book is deciding which one you want to read next. Things become harder still when you and your roommates collectively own hundreds of books, most of which have been on your to-read list for years. And if you own an e-reader with even more books on it—you see where I’m going with this, right?

This is why I have made a reading list for this year, and it looks something like this:

Update: Since I couldn’t find an electronic version of Ryu Murakami’s Almost Transparent Blue, and the physical version was too expensive in India, I decided to replace it with Popular Hits of the Showa Era.

2013: Year in Review

The Good

The Bad

The Ugly

The Highlights

Unlockments Achieved

What Next?

There were several goals that I couldn’t achieve in 2013. I’d like to tackle them again this year.

Besides these leftover goals from last year, I have some new goals for this year.

Have a happy new year, folks. Make it one worth remembering!

Loading Spinners With AngularJS and Spin.js

Spin.js is a tiny JavaScript library that helps you create beautiful loading spinners in every major web browser, all the way back to IE6. It is highly customizable, fast, and has zero dependencies. I’m using it in my AngularJS application to display loading spinners inside my ng-views while my REST API responds with the data each view needs to render itself.
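Basic usage is straightforward: construct a Spinner, optionally passing an options object, and spin it into a target DOM element. Here is a minimal sketch; the option values and the target id are arbitrary, so check the Spin.js documentation for the full list of options.

// A minimal Spin.js example. The option values below are arbitrary.
var opts = {
  lines: 13,   // number of lines to draw
  length: 7,   // length of each line
  width: 4,    // line thickness
  radius: 10,  // radius of the inner circle
  color: '#000'
};
var target = document.getElementById('my-spinner');
var spinner = new Spinner(opts).spin(target);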

I add a viewLoading boolean to the $scope of each controller that talks to the REST API. The initial value of viewLoading is true.

angular.module('MyApplication')
  .controller('MakesTonsOfAPICalls', function($scope) {
    $scope.viewLoading = true;
  });

After all the API calls complete successfully, I set viewLoading to false.

angular.module('MyApplication')
  .controller('MakesTonsOfAPICalls', function($scope, MyLargeModel) {
    $scope.viewLoading = true;

    // Grab all MyLargeModel objects.
    MyLargeModel.get({}, function(result) {
      // Do something with the result.
      $scope.viewLoading = false;
    });
  });

If I have to make multiple calls, I use the $q service to create a promise for each of them. Each promise is resolved or rejected depending on the status code that the API call returns. I then use $q.all() to call a function when all of the promises have been resolved. This function sets viewLoading to false. I will talk more about $q in another post, but here is a rather simplistic example for now:

$q.all([promise1, promise2, /* ..., */ promiseN]).then(function(data) {
  $scope.viewLoading = false;
});
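For the curious, here is roughly how one of those per-call promises might be built with $q.defer(). The endpoint and the helper name are made up for illustration; the point is simply to resolve on success and reject on a failing status code.

// Hypothetical helper: wraps a single API call in a promise.
// In a real controller, $http and $q would be injected.
function fetchLargeModel($http, $q) {
  var deferred = $q.defer();
  $http.get('/api/v1/my-large-model/').then(function(response) {
    // 2xx responses land here; hand the data to whoever is waiting.
    deferred.resolve(response.data);
  }, function(response) {
    // Anything else rejects the promise with the status code.
    deferred.reject(response.status);
  });
  return deferred.promise;
}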

I want the loading spinner to be displayed for as long as viewLoading is true, and be replaced by the actual view content as soon as viewLoading becomes false. I use a directive to do this. This is what the markup looks like:

<div ng-controller="MakesTonsOfAPICalls">
  <div my-loading-spinner="viewLoading">
    <!-- actual view content goes here. -->
  </div>
</div>

And this is what the directive looks like:

angular.module('MyApplication')
  .directive('myLoadingSpinner', function() {
    return {
      restrict: 'A',
      replace: true,
      transclude: true,
      scope: {
        loading: '=myLoadingSpinner'
      },
      templateUrl: 'directives/templates/loading.html',
      link: function(scope, element, attrs) {
        // Create a spinner and insert it into the container div from the
        // template. Note that element.find() with a class selector needs
        // jQuery; Angular's built-in jqLite only supports tag name lookups.
        var spinner = new Spinner().spin();
        var loadingContainer = element.find('.my-loading-spinner-container')[0];
        loadingContainer.appendChild(spinner.el);
      }
    };
  });

For this to work correctly, the Spin.js code has to be loaded before the directive code.

The directive is restricted to attributes only and replaces the original content on the page with the content from my template. I set transclude to true so I can re-insert the original content back into the page later. If you look back at the HTML for the view, you will find that the value of the myLoadingSpinner attribute is viewLoading. When Angular encounters our markup, it will create a two-way binding between the loading variable in the directive’s scope and the viewLoading variable in the parent controller’s scope. If you find this confusing, you may want to read about directives on the AngularJS website.

Before I explain the link function, take a look at the directive’s template:

<div>
  <div ng-show="loading" class="my-loading-spinner-container"></div>
  <div ng-hide="loading" ng-transclude></div>
</div>

The markup is simple enough. The div with class my-loading-spinner-container is displayed when loading is true, and hidden if it is false. The second div is hidden if loading is true, and displayed if it is false. The second div also uses ng-transclude to re-include into the page the original content that was replaced by our directive.

Finally, the link function creates a new loading spinner, finds the div with the class my-loading-spinner-container, and puts the spinner inside the div. Hence, the spinner is displayed as long as loading is true, and the actual content is shown when it becomes false, which is exactly what we want.

Simulating a Slow Internet Connection

I am currently working on a single page web application written with AngularJS that communicates with a REST API written with Django and Tastypie. Since I run both the client and the server locally on my machine, every HTTP request that my AngularJS application makes receives a response from the REST API in tens of milliseconds. This is not ideal.

In the real world, Internet connections have latencies that range anywhere from a few hundred milliseconds to tens of seconds. To give my user a smooth experience even on a slow internet connection, I need to ensure that she receives appropriate feedback whenever she performs an action that requires a round-trip to the server. For example, when she navigates to a view that requires a large amount of data to be fetched from the server, my application needs to display a loading spinner of some sort on the screen to indicate progress. I cannot have the UI be completely blank for the time it takes my API to respond to the HTTP request.

Unfortunately, if I run the application locally, it becomes impossible for me to test my progress indicators. The request-response cycle completes so quickly that they are replaced by the actual content within a split second.

After searching the Internet in vain for a solution that would let me simulate a “real” Internet connection from within my browser, I wrote a Django middleware that uses time.sleep() to delay each HTTP response that my application returns by 0 to 4 seconds.

import random
import time

class SlowPony(object):
    def process_response(self, request, response):
        # Delay every response by a random whole number of seconds (0-4).
        time.sleep(random.randint(0, 4))
        return response

Then I added this middleware to my MIDDLEWARE_CLASSES:

MIDDLEWARE_CLASSES = (
    # ...
    'my_application.middleware.SlowPony',
)

I don’t like this solution. For one, this does not cause any of the requests to time out, which happens frequently on mobile Internet connections. It’s better than nothing, though.

I find it surprising and disappointing that neither Firefox nor Chrome lets me simulate a slow Internet connection from its developer tools. Fast, reliable, low-latency Internet connections are a rarity, especially since a large and growing number of people browse the web over mobile Internet. That situation is unlikely to change for years to come, yet the tools for testing our web applications under such conditions are either incomplete or non-existent.

An Introduction to CMake

GameDev.net recently published a four-part series on writing cross-platform build systems with CMake. The series first covers the very basics of CMake, then follows up with a tutorial on adding unit tests to your codebase using googlemock. Parts 1, 2, 3, 4.

(Edit: with the release of CMake 2.8.11, a fifth part was recently added.)

I consider myself lucky that I don’t have to work with C++ code very often. It’s not that I dislike the language; it’s just that I dislike working with build systems. All build systems are terrible and, much worse, poorly documented. I can never figure out how to accomplish the simplest of tasks with any of them. CMake happens to be the least bad of all the build systems I’ve had the profound displeasure of using, and this series is the best set of tutorials on CMake I’ve encountered.

All About Iteration

Bob Nystrom’s two-part blog post about iteration in programming languages includes perhaps the clearest explanation of coroutines I have read so far. It begins with an exploration of how iteration is implemented in mainstream programming languages, and goes on to talk about internal and external iterators, the yield statement in C# and Python, the call stack, coroutines in Ruby (or fibers, as Ruby likes to call them), and why iteration is another way of thinking about concurrency.

Read Iteration Inside and Out and Iteration Inside and Out, Part 2.

Tastypie and Timezones

If you use Tastypie with Django’s timezone support turned on (i.e., you have USE_TZ = True in your settings.py), you will notice that Tastypie helpfully converts all dates and times in your resources to the timezone specified in your TIME_ZONE setting before returning them. If you care about internationalization, this is not the behavior you want. Tastypie encodes dates and times in the ISO8601 format by default, and these dates and times have no timezone information attached to them, which means that the consumers of your API have no way to correctly display them or convert them to other timezones.

This is what ISO8601 datetimes look like:

{
    "end_time": "2013-04-01T06:32:06",
    "start_time": "2013-04-01T00:30:00"
}

Both those datetimes are in the Asia/Kolkata (UTC+5:30) timezone. How can I tell? I can’t. Unless I look in my settings.py. Not cool.

There are two solutions to this problem. First, you could add this line to your settings.py:

TASTYPIE_DATETIME_FORMATTING = 'rfc-2822'

This will cause Tastypie to format dates and times using the format specified in RFC2822. Your dates and times will now include timezone information:

{
    "end_time": "Mon, 1 Apr 2013 12:02:06 +0530",
    "start_time": "Mon, 1 Apr 2013 06:00:00 +0530"
}

The second solution, which is the solution I prefer, is simpler: use UTC everywhere on the server and let your API consumers deal with timezone conversions. Set your TIME_ZONE to "UTC" and sleep easy.

If your API consumer is a web application, I highly recommend using Moment.js for all date and time operations.
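For example, if the server sends UTC timestamps like the ISO8601 ones above, the client can parse them as UTC and render them in the visitor’s local timezone. A quick sketch, using the end_time value from the earlier example:

// Parse the server's timestamp as UTC, then display it in the
// browser's local timezone.
var endTime = moment.utc('2013-04-01T06:32:06');
console.log(endTime.local().format('MMMM Do YYYY, h:mm a')); // "April 1st 2013, 12:02 pm" in IST
console.log(endTime.fromNow());                              // e.g. "3 hours ago"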

Travel Light

Year 2011. Lines of elisp in .emacs: 145. Lines of configuration in .vim: 115.

Year 2013. Lines of configuration in .vim: 0. Lines of JSON in Sublime Text configuration: 0.

This is so much better.

WordPress: Under Siege

As I mentioned in my last post, I recently switched my website from my homegrown Django blogging app to WordPress. Before installing WordPress on my VPS, I installed it on a VM so I could test the waters before jumping in. I created an Ubuntu 12.04 VM using VirtualBox and gave it a gigabyte of RAM to work with. After I had WordPress up and running, I created some test posts and played around with various plugins and themes that I could find on the WordPress directory. I was dismayed to discover that WordPress has terrible performance out of the box, even if you disable all installed plugins. The WordPress dashboard served by my Ubuntu VM would easily take 4–5 seconds to load, and individual posts would take at least 2–3 seconds to load. I found this unacceptable, so I started searching StackOverflow and the excellent WordPress StackExchange for answers.

The two most straightforward performance optimizations that I could find were:

  1. Install a PHP opcode cache.
  2. Install a page caching plugin.

Installing an opcode cache on Ubuntu is easy:

$ sudo apt-get install php-apc

No extra configuration is required on Ubuntu. If you use a different distro, read the php-apc documentation on the PHP website.

Installing WP Super Cache is similarly easy, and a number of excellent tutorials for setting it up are scattered around the Web. Here is a good one for Apache, and here is one for nginx. I also recommend perusing this GitHub repository, which contains a complete set of configuration files for serving WordPress through nginx.

The Numbers

The numbers that follow are for a fresh install of WordPress 3.5.1 running on an Ubuntu 12.04 VM with 1GB of RAM, served by nginx 1.1.19 and php-fpm 5.3.10, and backed by MySQL 5.5.29. The host OS is Mac OS X 10.8.2 running on a MacBook Pro. All testing was done with Siege 2.74 hitting different pages of the WordPress website in a random order, with 100 concurrent users, random delays of up to five seconds between requests, and a five-minute duration per run:

$ siege -d5 -c100 -i -f url_list.txt -t5m

Note that these numbers only reflect a general trend in WordPress performance under load. Real world page load performance depends on many factors, including network latency, page size, whether you’re using a CDN or not, the number of separate JavaScript/CSS/image files per page, etc. The following numbers only indicate how quickly WordPress can push HTML to the client.

Despite the flawed testing methodology, these numbers are useful as rough indicators of the effectiveness of opcode and page caching.

Fresh Install Without Opcode Cache or Page Cache

Measurement                Value
Transactions               2710 hits
Availability               100.00 %
Elapsed time               299.47 secs
Data transferred           9.59 MB
Response time              8.37 secs
Transaction rate           9.05 trans/sec
Throughput                 0.03 MB/sec
Concurrency                75.74
Successful transactions    2710
Failed transactions        0
Longest transaction        9.58
Shortest transaction       0.20

Fresh Install With WP Super Cache and php-apc

Measurement                Value
Transactions               11833 hits
Availability               100.00 %
Elapsed time               299.70 secs
Data transferred           23.62 MB
Response time              0.02 secs
Transaction rate           39.48 trans/sec
Throughput                 0.08 MB/sec
Concurrency                0.75
Successful transactions    11881
Failed transactions        0
Longest transaction        0.42
Shortest transaction       0.00

Bonus: Numbers for ankursethi.in (this website)

This website runs the same software as my testing VM. The only difference is that it is hosted somewhere in Germany on a Hetzner VQ12 VPS, and I’m hitting it with Siege from New Delhi, India.

Measurement                Value
Transactions               8566 hits
Availability               98.73 %
Elapsed time               299.54 secs
Data transferred           40.03 MB
Response time              0.55 secs
Transaction rate           28.60 trans/sec
Throughput                 0.13 MB/sec
Concurrency                15.72
Successful transactions    8577
Failed transactions        110
Longest transaction        5.39
Shortest transaction       0.38

Closing Words

With these two simple performance optimizations, my test blog went from 9 transactions per second to 39 transactions per second, and page load time went from ~8 seconds to 0.02 seconds. This page load time is for users who have not logged in or left a comment; I see a more modest 1.5–2 second load time for logged-in users, which is still a 4x improvement. The concurrency number went from 75.74 to 0.75, which is a good thing in this case: with the same number of simulated users, lower concurrency means requests are being answered quickly instead of piling up.

These optimizations should be enough for a majority of low to medium traffic self-hosted WordPress blogs. For more advanced optimization techniques, I recommend reading this excellent article on the New Relic blog.