Week 2: Going to Siberia, Why Weeknotes, and Being a Jerk
I’m writing this from Delhi, where I’m visiting my family for a few days before I leave for Russia to attend DevFest Siberia 2018 as a speaker. My talk is about using Rust and WebAssembly to draw fractals in the browser. I’m really excited, not just because Rust is amazing and WebAssembly is amazing and being able to use both of them together is amazing, but also because this will be my first talk outside of India!
My throat, however, has still not fully recovered. I’m scared that I won’t be able to speak for 45 straight minutes without hurting myself badly or lapsing into a coughing fit. I’m going to a new doctor tomorrow and hoping for a miracle. Fingers crossed.
After I published last week’s post, a friend asked me why I wanted to publish these weeknotes on the Internet for everyone to see. Taking time to introspect is helpful, putting your thoughts down in writing is also helpful, sharing them with close friends and family is perhaps also helpful, but why put them up for strangers to see?
That question doesn’t have a single answer.
First, I enjoy the conversations that happen as a result of me publishing something on my blog. It’s powerful, to connect with another human being simply by virtue of typing up whatever I’ve been thinking about lately. I don’t have a vast army of fans hungering to read my next piece, but the five or six people who click through to my posts from Twitter usually end up talking to me, which is reason enough for me to continue writing.
Second, almost everything I’ve published online so far has been technical. I suppose these weeknotes are also an attempt on my part to break away from that kind of writing, to flex writing muscles I haven’t flexed since high school.
Third, it’s fun to have this little space online where I can just type and not have to worry too much about tailoring my words to a specific audience. I enjoyed the old-school blogging culture of a decade ago, which was what passed as social media back then. People wrote meandering posts about what they were cooking, their favorite coffee places, or how their dog walked all over their favorite rug with muddy paws that day. That kind of stuff now happens on Instagram, Twitter, or Facebook. Social media is fun, but it doesn’t quite afford me the space to think out loud in the way I’m doing right now.
Fourth, my weeknotes give me a chance to practice saying “I don’t know”, or “I’m struggling with this”, or “I don’t feel so great” until I have an easier time saying these things.
Fifth, maybe someday some of this will help somebody else?
Sixth, it gives me a feeling of accomplishment, of having made something. I know it’s nothing that has much value, but hey. It gives me a certain satisfaction.
I can probably go on about this, but I’m going to stop now. It’s been a long day and I’m about to fall asleep at the keyboard.
I’ve recently made a lot of progress on some of my creative projects because I’ve started dedicating an hour and a half every morning exclusively to them. During this time, I disconnect completely from all means of communication and focus solely on my work. I sometimes feel like a jerk when I turn my phone back on and find frantic messages from people who have been trying to get in touch with me, but nothing world-ending has happened yet. I’ll continue being a jerk for the foreseeable future.
I’ve started doing a similar thing, to a lesser extent, for the work-related writing I’ve been doing at Uncommon. It seems to be working, because I’ve already finished writing one blog post that I’m going to publish next week! I’m probably going to start leaving the office to do this, and sit at a nearby cafe with a coffee in order to get some thinking space.
In the next few days I’m coming up with a concrete plan for my client outreach efforts at Uncommon, and I’ll have more to say about it in my notes next week. I also have a few thoughts about the different ways I’ve failed at doing my job properly this year, but that’s another thing I need to sit on before I can write about it.
Until next time, Ankur.
Week 1: Health, Getting New Business, and Hip-Hop
I recently discovered Weeknotes and now I’m compelled to try writing them myself. The idea of reflecting and thinking out loud in public is fascinating.
I’ve been sick a lot this year. My current bout of sickness started when I came down with a bad cough that lasted three weeks. After I got better, I went right back to working long hours, going out, and staying up far too late. The infection never really went away completely and has now developed into some sort of an injury in my throat? Serves me right for not listening to my body.
On Monday I sent my client an email telling them that I have to quit working on their project because my brain can’t figure out how to write an if statement anymore. This is the first time in my career that I’ve walked away from a project for any reason. I understand that I genuinely needed to rest and heal, but I still feel pretty garbage about this whole situation.
I still can’t talk for too long without pain. Funny, because I’m speaking at ReactJS Bangalore next Saturday. Fun times.
Kids, take care of your body.
These last few months I’ve been thinking a lot about how to drum up new business for the Web Engineering team at Uncommon. So far, most of our new work has come to us serendipitously. Uncommoners have been active in different technology and design communities in India for years, and the networks we’ve built keep sending new clients our way.
While I’m thankful for all the amazing people we get to work with, relying on the same networks all the time means the kind of work we get to do is not as varied as I’d like it to be. More than that, an over-reliance on existing networks leaves us helpless in the face of dry spells, since we have no idea how to effectively reach people outside of our circles.
I say we, but really I mean just me.
I don’t have a repeatable strategy for finding new work, and this year has been all about figuring that out. I’ve tried a few things and learned a few things, mostly about what works for me personally. Here is a braindump:
- Cold email has a very low conversion rate, even when you’re reaching out to people you’ve previously worked with.
- Social media can help you find work. Do not Twitter uselessly; use it instead to become a Thought Leader™ and engage in some Growth Hacking™.
- Creating content that helps someone accomplish something is one of the most effective ways of connecting with people. Think blog posts, books, YouTube tutorials, livestreams, podcasts, conference talks, and workshops.
- I don’t listen to podcasts, watch livestreams, or look up programming tutorials on YouTube. I do enjoy reading, as well as watching conference talks. I want to create content for people who have similar preferences, instead of putting energy into content I would personally never consume.
- Writing is the easiest, cheapest, most efficient way to reach people. It’s hard to stand out from the crowd with just writing, but it’s still worth doing.
- I find writing opinion pieces incredibly hard. Much harder than writing something purely technical. In the short term, I’m planning to stick exclusively to technical content. I’ll try my hand at other kinds of writing when I’ve made a habit out of publishing regularly.
- Speaking is fun! It’s much more time consuming than almost anything else, but the payoff makes it worth doing.
- Like all creative endeavours, technical writing and speaking will only pay off if you have consistency and a large body of work. Quality is usually a result of consistency and volume.
- Whether you’re a freelancer or you’re running a consulting firm, you have to make time in your schedule for generating new leads. This is part of your job, and it’s not optional.
I don’t have anything particularly insightful to say about this subject yet, but I will keep coming back to it in the coming weeks and months. It occupies a large part of my attention.
I’ve grown up listening to hip-hop and, like any other hip-hop fan, I’ve tried my hand at writing my own verses. I’ve recorded a few of them and shared them with friends, but it has never been something I’ve taken seriously.
In the last few years I haven’t written much at all, focusing instead on music production with Ableton and the incredible Push 2. But lately I haven’t been able to stop thinking about writing again. Maybe it’s the political climate, maybe it’s the incredible new music coming out of the Indian hip-hop scene, maybe it’s just a phase. Point is, I want to write.
So I’ve started. And this time I’m writing in Hindi.
I’m glad to report that my output is not as corny as I’d expected. Progress is slow, but I’m seeing results and it’s making me very happy.
Until next time, Ankur.
E-Commerce Case Study: Building Faster Listing Pages on abof.com (Part 2)
This case study was first published on the Alaris Prime blog on October 6, 2016. You can read the original case study here.
If you haven’t read the first part of this case study, I suggest you go check it out before diving into the second part. It’s a quick read that explains in detail our motivations for the technology choices we made while building the new abof.com.
Done? Great! In this second part, I’ll talk about my and my team’s impressions of the React ecosystem, our opinions on build tooling, and our approach to performance testing.
Learning React
When we started working with abof, all of us were primarily Angular 1.x developers. We had used the framework to build several complex applications, which meant we had the ability to ship quality Angular code rapidly.
However, with a stable release of Angular 2 right around the corner, starting a new Angular 1.x project would have been irresponsible on our part. My experience with building a small application using Angular 2 a few weeks prior to the start of the abof project had left me with mixed feelings. I personally enjoyed working with the framework and found it a welcome improvement over Angular 1.x, but I had to admit that the number of concepts a newcomer to the framework must wrap her head around just to build a functioning TODO list application with Angular 2 was unjustified.
Besides, as I mentioned in part 1, there were other issues with Angular 2 that made it a no-go for us (mainly the large payload size, and the lack of support for universal rendering).
With some apprehensiveness, we began the process of learning React—and what looked to us like a glut of supporting libraries that were apparently absolutely required to produce a working application. There were a number of tools and libraries that we either didn’t understand the purpose of, or didn’t know if we needed. Redux, Radium, Immutable.js, MobX, Relay, Falcor, Flow, Babel, Webpack, just to name a few.
Despite this fractured and confusing landscape, learning React turned out to be easier (and way more fun!) than we anticipated.
We found that there is only one thing that is absolutely required to build React applications: React. Learning it takes a few hours, and—besides the official docs—there are plenty of tutorials on the web that can accelerate and supplement the learning process. I’m partial to the tutorials on egghead.io.
After we wrapped our heads around React, we built a small prototype that pulled product data from the abof.com REST API and rendered it as a grid. No fancy JavaScript preprocessors, no supporting libraries, just plain old ES5 and React. Over the course of a week we added a few more features to this prototype (routing, pagination, infinite scrolling), but it was mostly an experiment that never made it to production.
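For flavor, here’s a minimal sketch of what a component in that prototype might have looked like: plain old ES5 and React.createElement, no JSX. The endpoint and field names here are hypothetical, not the actual abof API.

// A minimal sketch in the spirit of that first prototype: plain ES5,
// no JSX, no supporting libraries. The /api/products endpoint and the
// field names are hypothetical.
var ProductGrid = React.createClass({
  getInitialState: function() {
    return { products: [] };
  },
  componentDidMount: function() {
    var self = this;
    fetch('/api/products?page=1')
      .then(function(response) { return response.json(); })
      .then(function(data) { self.setState({ products: data.products }); });
  },
  render: function() {
    return React.createElement(
      'div',
      { className: 'product-grid' },
      this.state.products.map(function(product) {
        return React.createElement(
          'div',
          { className: 'product-card', key: product.id },
          React.createElement('img', { src: product.imageUrl, alt: product.name }),
          React.createElement('p', null, product.name)
        );
      })
    );
  }
});
ReactDOM.render(React.createElement(ProductGrid), document.getElementById('root'));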
Having built this throwaway prototype, we were in a position to take a deeper look at the React ecosystem and understand what problems each of the popular libraries was designed to solve. For example, after having spent a day scratching our heads over how to elegantly share state between components, we had a better appreciation of the problems Redux solves.
This exercise let us choose, from the plethora of libraries available, the subset that was relevant to us. In the end, the structure of our application was very similar to what a tool like create-react-app would produce, and our list of dependencies was no different from what any standard React application written in 2016 would use. However, by taking a YAGNI approach to building abof, we were able to understand at a deeper level what purpose each of our libraries served. Most importantly, it kept us from getting overwhelmed with new tools and concepts right at the beginning of the project.
Build Tooling
We wrote most of our build system from scratch, adding tools and features as we went. This often caused us pain—for example, adding support for isomorphic rendering after most of the application was already written and functional cost us a few days of development time. We had to rewrite parts of our codebase to make sure they ran correctly on Node.
Our build system did nothing out of the ordinary, but knowing it inside-out gave us the confidence to jump into our Webpack and Babel configurations and tweak things to our heart’s desire. It also helped us automate our release process to a point where building and deploying a new version of the website was a single command.
Would I recommend that every team assemble their build tooling in this piecemeal manner? No. As much as we learned from this exercise, starting with one of the hundreds of available React boilerplates on GitHub and carefully studying its source code would have been a more productive exercise and given us an equal amount of confidence in our tooling.
If you’re starting a new React project now, don’t even think twice about using create-react-app.
Measuring and Optimizing Page Load Performance
Our primary source of insight while measuring page load performance was using the website on real devices connected to real mobile networks. A number of tools exist to simulate different network conditions and spit out numbers, but we found that seeing what our users see on flaky connections and devices was valuable while optimizing our page load times.
WebPageTest and PageSpeed Insights are great for giving you hard numbers to target while building or optimizing your application, and for pinpointing exact areas of your application that need work. However, only by testing on real devices will you know which optimizations directly enhance your users’ experience and which ones shave a few seconds off your loading time without affecting perceived performance in any way.
Our second source of performance metrics was the Chrome developer tools. Even while developing locally, we tested the website with a throttled connection. This pushed us to keep the number of API requests and payload sizes small. We set ourselves a page size quota, which was 150kb of data minified and gzipped. That sounds generous, but we got away with it because we were serving a pre-rendered page to the user.
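As an aside, later versions of Webpack (2 and up) can enforce a budget like this at build time. This wasn’t part of our setup back then, but the relevant configuration looks something like the following sketch; with hints set to 'error', an oversized bundle fails the build instead of sneaking into production.

// webpack.config.js (sketch): fail the build whenever an asset or
// entrypoint exceeds the 150 KB budget. Entry, output, and loaders
// are elided. Requires Webpack 2 or later.
module.exports = {
  performance: {
    hints: 'error', // use 'warning' to nag instead of failing the build
    maxAssetSize: 150 * 1024,
    maxEntrypointSize: 150 * 1024
  }
};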
Our final source of performance metrics was WebPageTest. We ran both WebPageTest and PageSpeed Insights after deploying to our staging server to surface issues we might have overlooked. There are far too many things that can go wrong while building a web application, and automated testing tools serve as—for lack of a better term—interactive checklists that will help you ensure you comply with all the best practices. If it hadn’t been for WebPageTest, we would have never realized that the cache headers on our product images were all wrong, or that we could compress them more aggressively.
Measuring Application Performance
Just like page load performance, our primary source of insight while measuring application performance was using the application on a real device. We had access to a number of low-end mobile phones running Android and Windows Phone, and we would periodically (usually after a staging deploy) test the website on these to make sure abof performed acceptably.
Final Words
This part of the case study was a mostly subjective look at the React landscape. In the last and final part, I’ll talk about some specific issues we ran into while building abof and how we tackled them.
E-Commerce Case Study: Building Faster Listing Pages on abof.com (Part 3)
This case study was first published on the Alaris Prime blog on January 4, 2017. You can read the original case study here.
Part 1 of this case study was a general overview of how the Alaris Prime team rebuilt abof.com to load almost instantly even on flaky mobile connections, and part 2 was an account of how we got to grips with the often confusing React ecosystem. If you haven’t checked out the first two parts yet, you should do so now.
In this final part of our case study, I’ll discuss a few specific issues we ran up against through the course of the project, and how we tackled them.
Keeping Track of Scroll Position in an Infinite Scrolling Grid
abof’s product listing page is an infinitely scrolling grid of product images that loads 12 items per “page”. When a user visits a listing URL, our CDN responds with a pre-rendered HTML page with an initial set of 12 products already loaded. Another 12 products are loaded asynchronously the moment our JavaScript bundle loads and React takes over the page. From this point on, a new set of products is loaded whenever the user scrolls to the last loaded page.
A recent post on the Google Developers blog talks about the challenges inherent in implementing an efficient infinite scrolling list in the browser. The post is recommended reading, and I won’t repeat the information it already covers in this case study. Instead, I’ll talk about how we use a URL to keep track of the user’s position within our infinite scrolling grid without slowing the browser down.
Two common causes of jank on pages that use infinite scrolling are:
- Event listeners on the document’s scroll event.
- Repeatedly querying the DOM from those event listeners.
With a little bit of work, we can avoid listening on the scroll event altogether, as well as keep DOM queries to a minimum.
Listing URLs on abof look something like this:
https://abof.com/women/clothing/dresses?page=xxx
That page=xxx bit at the end keeps track of the user’s position within the grid, and changes as she scrolls from page to page.
Every 12th product in the grid has a data-page-end property attached to its DOM representation that indicates that the product appears at the end of a certain page. For example, the product card at the end of the 4th page (i.e., the 48th product in the grid) is marked up as follows:
<div itemscope="" itemtype="http://schema.org/Product" class="product-card product-card--data-marker" data-page-end="4" data-product-id="205675">
<!-- product details here -->
</div>
We call these elements page markers, and we keep track of them in an array called activePageMarkers inside our ProductGridContainer component. Whenever a new set of products is loaded, any page markers inside that set are appended to this array.
These page markers are references to actual DOM nodes within the document. Along with these references, we also keep track of their positions on the page, as well as their dimensions. This way, we don’t have to query the DOM for this information repeatedly as the user interacts with the page. We only recalculate it when the user triggers an event that is likely to invalidate our existing data (e.g., resizing the page or rotating the device).
Finally, we use requestIdleCallback() to fire a function called syncPageLocation() whenever the browser is idle, throttle it so it fires at most once every 500ms, and make sure it doesn’t fire if the user hasn’t scrolled the page for a while.
syncPageLocation() uses the browser’s scroll offset and the position data stored in activePageMarkers to find the page marker closest to the bottom of the page. It then extracts the value of data-page-end from that element, and uses history.replaceState() to change the page=xxx bit in the URL to reflect the value stored in data-page-end.
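Here’s a simplified sketch of this machinery. The names and structure are illustrative, not the actual ProductGridContainer internals:

// Simplified sketch of the URL-syncing machinery; illustrative only.
let activePageMarkers = []; // [{ top, page }]

// Measure marker positions once, and re-measure only when an event
// invalidates the cached data (resizing, device rotation).
function measurePageMarkers() {
  const nodes = document.querySelectorAll('[data-page-end]');
  activePageMarkers = Array.from(nodes).map(node => ({
    top: node.getBoundingClientRect().top + window.pageYOffset,
    page: node.getAttribute('data-page-end')
  }));
}
window.addEventListener('resize', measurePageMarkers);
window.addEventListener('orientationchange', measurePageMarkers);
measurePageMarkers();

// Find the marker closest to the bottom of the viewport and rewrite
// the page=xxx bit of the URL to match it.
function syncPageLocation() {
  const viewportBottom = window.pageYOffset + window.innerHeight;
  let currentPage = null;
  activePageMarkers.forEach(marker => {
    if (marker.top <= viewportBottom) {
      currentPage = marker.page;
    }
  });
  if (currentPage !== null) {
    history.replaceState(null, '', `?page=${currentPage}`);
  }
}

// Fire syncPageLocation() only when the browser is idle, at most once
// every 500ms, and only if the user has scrolled since the last run.
let lastRun = 0;
let lastScrollY = -1;
function idleLoop() {
  const now = Date.now();
  if (now - lastRun >= 500 && window.pageYOffset !== lastScrollY) {
    lastRun = now;
    lastScrollY = window.pageYOffset;
    syncPageLocation();
  }
  requestIdleCallback(idleLoop);
}
requestIdleCallback(idleLoop);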
This machinery allows a user to share the URL of the listing page over IM, email, or social media with the confidence that anybody who follows it will see the same set of products that were on her screen a moment ago. Moreover, it allows her to move back and forward between product detail pages and listing pages without losing her position in the grid.
Analytics with Google Tag Manager and Redux Middleware
Analytics on abof.com are powered by Google Tag Manager hooked up to a number of third-party analytics providers.
On each page, the analytics team at abof wanted to capture a number of custom events tied to specific user interactions. We wanted to do this in a way that none of our components had to be made aware of analytics or GTM.
We started out by making a list of all the custom events that the analytics team wanted to capture. For example, they wanted to capture a bunch of data about the current page whenever the user changed its sort option from the default value of “Popularity” to one of the other available options (“Just In”, “Discount—High to Low”, “Price—Low to High”, “Price—High to Low”).
Then, we mapped each interaction to one or more of our React components. The components mapped to each user interaction would emit a Redux action containing all the data we needed to capture about that interaction. For example, the SortDropdown component would emit an action called SORT_OPTION_CHANGED every time the user changed the sort option on a page. This action looked something like this:
{
name: 'SORT_OPTION_CHANGED',
payload: {
from: 1,
to: 4
}
}
In the payload object, the from field kept track of the sort option before the user changed it, and the to field kept track of the new sort option.
Of course, our components were not aware of all the data required by an analytics event. For example, the SortDropdown didn’t know whether the user was logged in, what her IP address was, or even the current page URL. We didn’t want our components to be analytics-aware, so we only had them capture the data that they actually had access to. We filled in the missing bits using a Redux middleware called gtm.
The gtm middleware looked at each Redux action that we were interested in, created one (or, in some cases, more than one) analytics event for each action, filled in any missing information that the events required, and pushed them into GTM’s dataLayer array.
This architecture allowed our components to be oblivious of GTM while still allowing analytics data to be collected at a very granular level.
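Here’s a stripped-down sketch of what such a middleware looks like. The event list and the state fields are illustrative, not the production code:

// Stripped-down sketch of the gtm middleware; illustrative only.
// It uses the standard Redux middleware signature and pushes enriched
// events into GTM's global dataLayer array.
const EVENTS_OF_INTEREST = ['SORT_OPTION_CHANGED'];

const gtm = store => next => action => {
  if (EVENTS_OF_INTEREST.indexOf(action.name) !== -1) {
    const state = store.getState();
    window.dataLayer.push({
      event: action.name,
      payload: action.payload,
      // Fill in the bits the emitting component didn't know about.
      loggedIn: Boolean(state.user),
      pageUrl: window.location.href
    });
  }
  return next(action);
};

The middleware is then handed to Redux’s applyMiddleware when the store is created, so every dispatched action flows through it.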
Caching Pre-Rendered Pages for Logged-in Users
Once our universal React app renders a product listing page on the server, abof’s CDN caches it for 10 minutes. This not only shaves a few hundred milliseconds off our load time, but also helps keep abof’s server bills down.
This optimization is straightforward to apply to requests that come from customers who are not logged into their abof accounts. Any given listing page will look identical to all of these anonymous users, which means we can serve them whatever the CDN has cached.
However, we can’t blindly serve a cached page from the CDN to a user who is logged into abof. A customer who is logged in sees a few extra bits of information on each listing page:
- Her username, with a link to her profile, on the top right corner of the page (on mobile, this appears in the hamburger menu).
- A dropdown listing all the items she’s added to her cart.
- If she’s added an item to her favorites, the tiny heart icon on the top right of each product image is filled in.
Since this information varies from user to user, caching the page is not an option for logged-in users. On the other hand, letting our universal application deal with every request that comes from a logged-in user means it now has to handle a load it was never designed for.
We work around this problem by serving the same cached listing pages from our CDN to every single user—logged in or not—and having JavaScript fill in the missing information after page load.
This is what a typical page load looks like:
- User makes a request to a listing page.
- The CDN serves up a static HTML page that doesn’t contain any user-specific information (i.e., no cart, no favorites, no username). At this point the user can start interacting with the page.
- Our JavaScript bundle loads, and React takes over.
- Our root component makes a request to a REST endpoint that returns user information.
- If the endpoint returns valid information, our app knows the user is logged in. At this point, it makes requests for cart items, favorites, and whichever other bits of information are required to customize the page for this specific user.
- If the endpoint doesn’t return valid information, our app knows the user is not logged in. It doesn’t need to do anything special to handle this case.
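In code, the client-side half of this dance looks roughly like the following sketch. The /api/me endpoint and the action creators are hypothetical, and dispatch is assumed to come from react-redux’s connect():

// Sketch of the post-load personalization step; the /api/me endpoint
// and the action creators (fetchCartItems, fetchFavorites) are
// hypothetical. dispatch is assumed to come from react-redux.
class App extends React.Component {
  componentDidMount() {
    // Every visitor got the same cached HTML; now find out whether
    // this particular visitor is logged in.
    fetch('/api/me', { credentials: 'include' })
      .then(response => (response.ok ? response.json() : null))
      .then(user => {
        if (user) {
          // Logged in: fetch the user-specific bits of the page.
          this.props.dispatch(fetchCartItems(user.id));
          this.props.dispatch(fetchFavorites(user.id));
        }
        // Not logged in: the cached page is already correct as-is.
      });
  }

  render() {
    // The listing page itself: the same markup the server rendered.
    return this.props.children;
  }
}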
This architecture is not perfect. On slower connections, the user sees page elements move around and change as we fetch the extra information needed to assemble the page. However, it lets us eke out that last bit of performance from an already fast webpage.
Final Words
In this final part of our case study, I talked about three specific problems we faced while rebuilding the listing pages on abof.com:
- Keeping track of page URLs as users scroll through abof’s infinite scrolling grid of products.
- Using Google Tag Manager and Redux middleware to collect granular analytics without impacting page performance.
- Caching pre-rendered pages in an application that uses universal rendering.
In case you missed the first two parts, you can read them here: part 1, and part 2.
E-Commerce Case Study: Building Faster Listing Pages on abof.com (Part 1)
This case study was first published on the Alaris Prime blog on June 8, 2016. You can read the original case study here.
abof.com (pronounced ae-boff dot com) is an online fashion store that’s part of Aditya Birla Group’s e-commerce strategy. Earlier this year, the company brought in the Alaris Prime team along with Ciju from ActiveSphere Technologies for a complete rewrite of the product listing page on abof.com. After we delivered the rewrite, the load times for the page on 3G connections went from ~20 seconds to ~7 seconds, and bounce rates decreased by over 40%. These improvements have encouraged abof to invest a significant chunk of their technology resources into web performance, in particular the React and Redux ecosystem.
In this three-part case study, I will talk about abof’s motivations for the rewrite, the technology choices our team made to meet abof’s business goals, our rationale for the choices we made, and our experiences with React, Redux, and the ecosystem that has emerged around these libraries.
Motivation
Most Indians access the web using mobile phones, and this fact is reflected in the analytics data collected on abof.com: at the time we started working with abof, more than 60% of the website’s traffic came from mobile users.
While the legacy version of the website adequately served the needs of desktop users, it had three issues on mobile:
- For first-time visitors on 3G connections, a first paint of the product listing page could take over 20 seconds. Other pages on the website had similar first paint characteristics.
- JavaScript performance on most pages was poor, even on high-end mobile devices.
- Since the website was initially built for desktop and only later adapted to smaller screens, the mobile user experience was sub-optimal.
These performance issues were the root cause of low conversion rates on mobile, with as many as 50% of new visitors dropping off after the first page load.
In line with industry practices, abof’s mobile strategy was to provide users with a minimally useful experience on the mobile web while pushing them to install the company’s native Android/iOS apps, which would unlock the full shopping experience. However, relying purely on native apps to drive mobile sales has been a losing proposition in the developing world for a while now for a number of reasons:
- Retention rates for native apps have been historically poor. Unless an application serves a very specific need, chances are users will uninstall it within about a week of trying it out.
- The on-boarding flow for a native application has a huge amount of friction: the user has to visit a website, click through to the application’s page on the App Store/Google Play, and wait for the application to download and install before she can use it. Unless there’s a compelling reason for her to install an application, the user will not jump through the hoops.
- App fatigue has set in. Nobody wants to install yet another app.
- Users are wary about installing new applications because, more often than not, they slow down their devices, take up memory, and drain battery life. This is especially true in countries like India where most handsets on the market are vastly underpowered.
- Users are sick of notification spam.
Industry heavyweights Flipkart and Myntra—among others—have tried an app-only strategy, only to re-launch their mobile websites, allegedly in the face of dropping conversion rates.
Meanwhile, the introduction of new browser APIs—most notably the ServiceWorker API—has enabled mobile webapps to provide the same level of engagement to users as their native counterparts. The webapps of today are performant, run offline, can be launched from the users’ home screens in a chromeless window, and can engage users using push notifications even after they’ve navigated away.
The product team at abof has always understood the importance of a good user experience across all platforms, including the mobile web. However, these recent shifts in the mobile landscape in India pushed abof to not just bring the mobile web UX on abof.com up to par with native apps, but to make the web the centerpiece of their mobile strategy. Enter Alaris Prime.
Goals
The analytics team at abof identified three areas of the website where the largest percentage of mobile users would drop off: the product listing page, the login/signup page, and the checkout process.
Out of these three areas, the primary point of entry into the website for first-time users is the product listing page. This is where the sales funnel starts. It made sense, then, to begin with a rewrite of the listing page and immediately put it into production in order to measure the impact of improved performance on conversion rates.
Our primary goals while building the new listing page were:
- Minimize time to first paint on 3G connections.
- Improve JavaScript performance across all device classes.
- Improve user interactions on smaller screens.
To achieve these goals, we followed a few tried-and-tested guidelines:
- Keep the payload size and number of resources delivered to the browser as small as possible.
- Minimize network calls made from the client.
- Use third-party libraries only if absolutely necessary. When using a library, make sure its performance impact is well understood.
- Test performance on real devices. We usually kept a stack of cheap Android and Windows Phone handsets on our desks while developing, and we’d take some time out daily to test our new code on each of them.
- To ensure the page shows up as quickly as possible, pre-render the initial HTML on the server and deliver it to the user.
- Lazy load resources whenever possible.
Technology Stack
At the time we started our engagement with abof, all of us were Angular developers. We had put numerous Angular 1.x applications of varying complexity into production, and we had been evaluating Angular 2 for new projects. While we loved the direction Angular 2 was headed in, the framework had a few issues that rendered it impractical for our purposes:
- A Hello World application written with Angular 2 weighed in at over 150 kilobytes. This was completely unacceptable for an application that had to be delivered over flaky 2G and 3G networks.
- The Angular team had promised support for server-side (or isomorphic) rendering of webapps, but it was not clear when it would land.
- The tooling around the framework was not very mature.
Besides Angular 2, we also evaluated Vue.js and Riot.js. Both of these are powerful libraries with tiny footprints, great performance, and support for isomorphic rendering, but the communities around them aren’t as large as the ones around React and Angular. Basing our rewrite on top of one of these libraries would have made maintenance and hiring harder for abof.
In the end, we settled on a battle-tested technology stack built around React and Redux:
- We used React as our view library. It’s small (about 56 kilobytes with everything included), easy to learn, blazing fast, well-supported by a wonderful community, and has great tooling built around it. Its performance characteristics on different devices and browsers are well understood and it has great support for isomorphic rendering.
- We used Redux to manage application state. It brings together a small number of composable ideas to elegantly tackle a hairy problem.
- Webpack was the workhorse that had the primary function of slurping up our ES6 code, turning it into ES5 code, analyzing and digesting it, and spitting out an optimized 150 kilobyte JavaScript bundle that we could deliver to our users. Besides that, it helped us generate separate builds for the browser and node from the same codebase, combine and inline our SVG icons, modify aspects of the build depending on environment variables, and a variety of other niceties that I’ll discuss in upcoming articles.
- Gulp helped us automate pretty much everything except ordering chai (soon!).
- Koa rendered our application on the server-side (see the sketch after this list).
- CSS frameworks are convenient for quick prototypes, but they do more harm than good on production projects. All our styles were written by hand using SASS, with some help from Bourbon.
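Here’s a sketch of what the server-rendering half of a stack like this looks like. It assumes Koa 2’s async middleware style, and the component and reducer names are illustrative rather than our actual code:

// Sketch of server-side rendering with Koa 2 and ReactDOMServer.
// Component and reducer names are illustrative.
import Koa from 'koa';
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import reducers from './reducers';
import ListingPage from './components/ListingPage';

const app = new Koa();

app.use(async ctx => {
  // Build the initial state on the server so the page is useful
  // before a single byte of JavaScript loads on the client.
  const store = createStore(reducers);
  const markup = ReactDOMServer.renderToString(
    React.createElement(Provider, { store },
      React.createElement(ListingPage))
  );

  ctx.type = 'text/html';
  ctx.body = `<!doctype html>
<html>
<body>
  <div id="root">${markup}</div>
  <script>window.__STATE__ = ${JSON.stringify(store.getState())}</script>
  <script src="/bundle.js"></script>
</body>
</html>`;
});

app.listen(3000);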
Performance Comparisons
What follows is a visual comparison of the legacy abof.com product listing page with the rewrite our team delivered on a 3G connection:
That’s over twice as fast!
Final Words
In the next two parts of this case study I will go into more details about our experiences with React and Redux, our thoughts on the current state of front-end tooling, the methodologies we used for gauging and improving both perceived and actual performance of the new abof.com, and how we used SASS and PostCSS to successfully prevent CSS-induced hair loss through the course of the project.
Seven Languages in Seven Weeks, Week 2: Io
Designed by Steve Dekorte, Io is a small, embeddable programming language that borrows its prototype-based object model from Self, its purely object-oriented nature from Smalltalk, and its homoiconicity from Lisp (although, unlike Lisp, it doesn’t use s-expressions to represent programs). The language is such a mind-expanding experience that I have now spent way more than a week playing with it.
Syntax
Io’s syntax takes only a few minutes to learn. In short, everything in Io is a message that is passed to a receiver:
Io> receiver message
A message can accept arguments:
Io> receiver message(param1, param2, ...)
And finally, a message without a receiver is sent to the top-level object called Object:
Io> writeln("this message is sent to Object")
That’s it. Any other syntax you see is sugar that gets translated into this simple form.
The receiver can choose whether it wants to evaluate a message or not, which allows you to do so selectively in order to implement domain-specific languages. For example, Io has an if conditional like any other language:
Io> if (a > 10, "more than 10", a = a + 10)
A simple re-implementation of if would look something like this:
Io> myIf := method(
call evalArgAt(0) ifTrue(call evalArgAt(1)) ifFalse(call evalArgAt(2))
)
And here’s how you’d use it:
Io> a := 10
Io> myIf(a == 10, "a is 10" println, "a is not 10" println)
a is 10
Io> a = 11
Io> myIf(a == 10, "a is 10" println, "a is not 10" println)
a is not 10
Prototypes
Io has a prototype-based object system, which it borrows from Self. After learning how Io deals with objects, I started to investigate JavaScript’s object model in greater depth. As a result, I walked away with a much better understanding of OOP in JavaScript.
In a language with a prototype-based object system, new objects are created using existing objects as templates. For example, in the next block of code, Animal is a clone of the top-level Object. It contains all the slots (or properties) of Object.
Io> Animal := Object clone
We can use the := operator to add a new slot to Animal:
Io> Animal talk := method(writeln("This animal can't talk."))
Cat is a clone of Animal. It gets all the slots of Object, as well as the talk slot defined on Animal.
Io> Cat := Animal clone
Io> meep := Cat clone
Io> meep talk
This animal can't talk.
However, it can have its own talk slot too.
Io> Cat talk := method(writeln("Meow!"))
Io> meep := Cat clone
Io> meep talk
Meow!
Likewise, Cow is a clone of Animal, but it doesn’t have its own talk slot. It always uses the talk slot from Animal.
Io> Cow := Animal clone
Io> daisy := Cow clone
Io> daisy talk
This animal can't talk.
The equivalent code in JavaScript is:
// In a file called animals.js
'use strict';
function Animal() { }
Animal.prototype.talk = function() {
console.log("This animal can't talk.");
};
function Cat() {
Animal.call(this);
}
Cat.prototype = Object.create(Animal.prototype);
Cat.prototype.constructor = Cat;
Cat.prototype.talk = function() {
console.log("Meow!");
}
function Cow() {
Animal.call(this);
}
Cow.prototype = Object.create(Animal.prototype);
Cow.prototype.constructor = Cow;
const meep = new Cat();
meep.talk(); // Prints "Meow!"
const daisy = new Cow();
daisy.talk(); // Prints "This animal can't talk."
Domain-specific Languages
Like Ruby, Io lets you build powerful DSLs. However, Io’s DSLs are far more powerful on account of its homoiconicity, and they can go as far as changing the very syntax of the language. In this regard, Io is similar to Lisp and its descendants.
Here’s an example straight from Steve’s book. Creating and using a map (a collection of key-value pairs) in Io looks something like this:
Io> map := Map clone
Io> map atPut("foo", "bar")
Io> map atPut("baz", "quux")
Io> map at("foo")
==> bar
Let’s add JavaScript-esque object literal syntax to the language, which will enable you to type the following into the Io interpreter and get back a built-in Map object:
{
"foo": "bar",
"baz": "quux"
}
First, we add a new assignment operator, represented by the colon (:), to Io’s operator table:
Io> OperatorTable addAssignOperator(":", "atPutValue")
Now whenever Io encounters a colon, it will translate it to the message atPutValue, with the item on the left of the colon as the first argument, and the item on the right as the second argument. So, the following code:
Io> "foo": "bar"
Is translated to:
Io> atPutValue("\"foo\"", "\"bar\"")
Notice the extra quotes around “foo” and “bar”. This is because Io treats all values passed to the assignment operator as strings.
Next, we define a new slot called atPutValue on the built-in Map:
Io> Map atPutValue := method(
self atPut(
call evalArgAt(0) asMutable removePrefix("\"") removeSuffix("\""),
call evalArgAt(1) asMutable removePrefix("\"") removeSuffix("\"")
)
)
This method removes the extra quotes from around its arguments, and passes them on to the built-in atPut method defined on Map.
Finally, we define a new slot called curlyBrackets on the top-level Object. Io will call the method stored in this slot every time it encounters a pair of curly brackets.
Io> curlyBrackets := method()
Inside this method, we create a new Map:
Io> curlyBrackets := method(
m := Map clone
)
Next, we take each argument passed to curlyBrackets and send it to our new Map for evaluation. In the end, we return the Map:
Io> curlyBrackets := method(
m := Map clone
call message arguments foreach (arg,
m doMessage(arg)
)
m
)
Now the following syntax will produce a new Map:
Io> { "foo": "bar", "baz": "quux" }
First, Io parses each key-value pair inside the curly braces. Since we’ve defined ":" to be an assignment operator that is equivalent to the message atPutValue, each key-value pair gets parsed into that message.
Next, all items within the curly braces are collected into a list and passed to the curlyBrackets method on Object. In the end, the JavaScript-esque syntax above gets parsed into this method call:
Io> curlyBrackets(
list(
atPutValue("\"foo\"", "\"bar\""),
atPutValue("\"baz\"", "\"quux\"")
)
)
Finally, our definition of curlyBrackets creates and returns a new Map for us.
Closing Thoughts
While the Io website has a tutorial, guide, and language reference, it’s hard to find any additional information about the language on the Web. There seems to be very little activity on the GitHub repository, the mailing list, or the subreddit. All the Io-related blog posts I could find were notes written by people working their way through Bruce Tate’s book.
For all practical purposes, Io is abandonware.
Regardless of its current status, Io’s simplicity, elegance, and extensibility put it in the same league as Lisp and Smalltalk. Even if you don’t end up using the language in a project, learning it will make you a better programmer.
I plan to come back to Io in the future, when I have some more time to tinker with language implementations. For now, it’s on to the next language!
Seven Languages in Seven Weeks, Week 1: Ruby
In an attempt to get back into programming language theory and implementation—a hobby I’ve neglected since I started working full-time—I recently started reading Bruce Tate’s Seven Languages in Seven Weeks. These are my notes and observations from my first week of study.
In week 1, Bruce introduces Ruby, drawing attention to its dynamic nature, expressive syntax, and metaprogramming capabilities. Together, these properties make it a suitable language for building natural, English-like APIs.
I didn’t think I could learn anything from Ruby that years of writing Python and JavaScript hadn’t already taught me. However, after three days of studying the language, I was pleasantly surprised to be proven wrong. There’s a lot to learn from Ruby about designing languages for humans first and machines second.
Syntax
Most of my experience with dynamic languages has been with Python and JavaScript, both of which are conservative with syntax sugar. This is a design decision I have always appreciated but, after acquainting myself with some Ruby libraries, I feel there’s an argument to be made in favor of liberally adding syntax sugar to make expressing common programming idioms more convenient.
The downside of all the sugar is that Ruby’s syntax comes with many surprises. For example, I can define a method that accepts a block as its final argument like so:
def myFun n, &block
# do something nice
end
However, I can’t define a method that accepts two blocks as arguments using the same syntax, since the ampersand is a special bit of syntax reserved for defining methods that accept a block as their final argument. So, this is an error:
def myFun &block1, &block2
# do something nice
end
There are a number of ways of calling a method, which can seem overwhelming at first. For example, consider the following method:
def myFunction param, &block
# do something nice
end
Both these ways of calling myFunction are correct:
myFunction(1) { |n| puts n }
myFunction 1 do |n| puts n end
But the following is a syntax error:
myFunction(1, { |n| puts n })
If, however, we change the definition of myFunction so that it does not accept a block as its second argument:
def myFunction param1, param2
# do something nice
end
Then, it can be called in the following ways:
myFunction 1, 2
myFunction(1, 2)
But the following is a syntax error:
myFunction(1) 2
Things can get quite murky when we start talking about defining and calling methods, especially when we throw the ampersand operator into the mix. You win some, you lose some.
REPL
For certain classes of programs, I like to use a REPL for interactive development. Ruby comes with irb, which is passable but not great. It helps while debugging and exploring language concepts, but it’s not powerful enough to let you interactively build and test programs.
I’d like to shout out ipython here, which is the best non-Lisp REPL I’ve used in my career.
Introspection
Both Python and JavaScript have great introspection capabilities, but they never feel as natural as they do in Ruby.
For example, I wanted to check if there was a way to convert a Ruby hash into a bunch of key-value pairs. I knew that, in Ruby, the names of all methods that convert one data type into another conventionally have the prefix “to_”. Armed with this knowledge, I only had to write this bit of code to list all methods on a hash that converted it into a different object:
{ :foo => :bar }.methods.select { |m| m.to_s.start_with? "to_" }
Just for comparison, the equivalent in JavaScript would be:
const isMethod = obj => typeof obj === 'function';
const listAllMethods = obj => {
const ownMethods = Object.getOwnPropertyNames(obj).filter(p => isMethod(obj[p]));
const proto = Object.getPrototypeOf(obj);
if (proto) {
return [].concat(ownMethods, listAllMethods(proto));
} else {
return ownMethods;
}
};
const s = {
foo: 'bar'
};
listAllMethods(s).filter(m => m.startsWith('to'));
Metaprogramming
Metaprogramming is Ruby’s strong suit, and there isn’t much I can say about it that hasn’t already been said. Rails’ ActiveRecord is perhaps the best example of how metaprogramming can help build clean, natural-looking APIs.
My opinion of metaprogramming has always been that it’s great for people building libraries and frameworks, but not so great for people building applications. Too much magic can make code unpredictable and hard to debug.
However, I haven’t spent enough time with Ruby to know for sure how the liberal use of metaprogramming affects code clarity and maintainability in a large codebase. Most programming languages make it circuitous to do any kind of metaprogramming, and the APIs are usually an afterthought. In Ruby, the metaprogramming APIs are so well-designed and natural that—used in moderation—metaprogramming might actually enhance code clarity without any negative effect on maintainability.
As an example of the power of metaprogramming, Bruce presents a class that returns the Arabic equivalent of a Roman numeral whenever it’s accessed as a static property, i.e.:
irb(main):001:0> Roman.X
=> 10
irb(main):002:0> Roman.XC
=> 90
irb(main):003:0> Roman.XII
=> 12
In Ruby, the Roman class looks something like this:
class Roman
def self.method_missing name, *args
roman = name.to_s
roman.gsub!("IV", "IIII")
roman.gsub!("IX", "VIIII")
roman.gsub!("XL", "XXXX")
roman.gsub!("XC", "LXXXX")
(roman.count("I") +
roman.count("V") * 5 +
roman.count("X") * 10 +
roman.count("L") * 50 +
roman.count("C") * 100)
end
end
I attempted to translate this into JavaScript using ES6 Proxies:
'use strict';
String.prototype.count = function(rx) {
return (this.match(new RegExp(rx, 'g')) || []).length;
};
const Roman = new Proxy({}, {
get: (_, property) => {
let roman = property.slice();
roman = roman.replace(/IV/g, 'IIII');
roman = roman.replace(/IX/g, 'VIIII');
roman = roman.replace(/XL/g, 'XXXX');
roman = roman.replace(/XC/g, 'LXXXX');
return (
roman.count('I') +
roman.count('V') * 5 +
roman.count('X') * 10 +
roman.count('L') * 50 +
roman.count('C') * 100
);
}
});
Not too bad, huh?
WordPress is Maximum Cool
If you dig into my post history on this blog, you’ll find I’ve written a lot about blogging platforms.
When I started writing a blog, back in the day before memes and Snapchat, I got myself an account on WordPress.com because that’s what you did in those times. Well, okay, you could also set up shop on Blogger or LiveJournal, but I was one of those people who wanted all the software to be GPL and all the content to be CC-BY-SA. My obsession with open source and open culture, coupled with my belief that nu-metal was a valid art form, ensured that I didn’t have too many friends growing up.
By the time I got to high school, I realized that software wasn’t magic and, if you were smart, you could build your own. When I graduated from blogging about high-school drama to blogging about open-source mailing-list drama, I built myself a little blogging tool using Python and Django. It was a great learning experience, but a few months into it I realized that maintaining your own blogging software is as boring as sitting through a J Cole album. Frustrated, I moved all my blog posts to a self-hosted WordPress install. You live and learn, right?
Just as I was starting to enjoy actually writing an actual blog that actual real people actually read, the Internet told me I was a schmuck for using WordPress. WordPress was built with PHP, and PHP was for uncool dads who wear New Balance and cargo shorts. If I wanted to be cool, I had to use something called Jekyll, which was written in Ruby. Writing Ruby makes you literally Miles Davis, or so I was told. I wanted to be Miles Davis, so I moved all my old posts from WordPress to Jekyll. I even wrote a custom theme for my blog, and made it responsive because some guy named Steve Jobs put a web browser in a phone and suddenly 1280x800 wasn’t the only game in town. Steve made many contributions to humanity, but even he couldn’t make New Balance cool.
After I got the hang of Jekyll, things started looking up for me. I lost a lot of weight, fell in love, and learned how to properly iron my shirts. A designer friend told me she liked the colors on my blog. Macklemore admitted Kendrick got robbed. I wrote quite a bit and life was perfect, but then Medium came along and everyone I met on the street was like, “Bro have you checked out Medium yet?”
I went home and checked out Medium, and discovered it was a cross between a GIF gallery, an emoji keyboard, and a stock photo website. The combination was compelling enough, and I immediately got myself an account. Right about this time, having to rely on a bunch of build tooling to post to my Jekyll blog was starting to frustrate me. It meant that I couldn’t publish my posts from anything but my work computer.
One Friday night I drank too much whiskey by myself and migrated all my Jekyll posts to Medium. Jekyll was still cool, but I’d been told that Medium was cooler and I’ve always strived to be maximum cool.
Medium was great for writing, and even better for getting more eyeballs on my posts, but within a few months I started to notice a decline in the number of people reaching out to me after reading something on my blog. When I published a useful post on my self-hosted WordPress or Jekyll blog, people usually stuck around for long enough to click on my about page. From there, they ended up contacting me either with the intention of hiring me, or just to thank me for something I’d written. This behavior was reflected in my analytics data.
The reason for the decline in communication after I moved to Medium was that the platform doesn’t give you space to talk about yourself. You can enter a short bio on your profile page, and a description for your publication if you create one, but there really is no way on Medium to maintain a regularly updated about page, or a page listing your public talks, or one listing your work. All of this content has to be hosted on an external service, at which point there’s little reason to use Medium in the first place, at least in my opinion. I want a single tool that I can use to centralize my online presence, and unfortunately Medium is not it. This is not Medium’s fault. The platform is just designed for a different use case.
In the six months I spent writing exclusively on Medium, nobody reached out to me over email. I got quite a few comments on my posts, but the conversations never went beyond technical discussions.
WordPress is maximum cool because it gives me total control over my online identity. Even a managed blog on WordPress.com is leagues ahead of anything Medium has to offer in terms of customization and having a corner of the Web to myself, to do with as I please.
They may not sound like groundbreaking features, but the ability to change some CSS, add some text or a few links to a sidebar, or create a few pages on your blog talking about yourself and your work goes a long way when it comes to having an identity of your own on the Web.
I’ve considered trying out other self-hosted blogging software—Ghost being the one that excites me most—but the theme and plugin ecosystem around WordPress is so large that everything I want to do with the software is usually a Google search away.
In summary:
- If you want to write a blog, don’t start by writing your own blogging software.
- Jekyll is great, but I like being able to write from my iPhone and home computer.
- Medium is too limiting in terms of how I portray myself on the Web.
- Ghost doesn’t have the kind of community and plugin ecosystem that WordPress has.
- Therefore, WordPress is maximum cool.
From my perspective, WordPress is a solved problem. PHP is fast enough, hosting is cheap, there are plugins for everything, customization is a cinch, and Santa Claus is real. After I’ve set everything up as I like it, I don’t really have to think about the software anymore and I can focus on writing.
I’ve moved all my writing back to a self-hosted WordPress and I intend to keep it here.
create-react-app and the Pit of Success
On May 18, the create-react-app team announced the release of v1.0 of the project. As always, a bunch of new features made it into the release, notable ones being a new version of Webpack, support for turning your app into a PWA using the ServiceWorker API, and support for bundle splitting using dynamic import()s.
If you haven’t used create-react-app before, this is how the project describes itself:
Create React apps with no build configuration.
And this is how it works:
- Install create-react-app from NPM.
- Run create-react-app your-project-name in your terminal.
- The tool sets up a full-featured build system for you, powered by Babel, Webpack (with a selection of useful plugins), Jest, Flow, ESLint, Autoprefixer, and several other commonly used frontend build tools.
- Start hacking!
This takes the boring grunt work out of frontend development, makes React more approachable for new developers, and helps the community standardize around a set of reliable, proven build tools. Those are the obvious benefits of a tool like this.
However, create-react-app has a less obvious benefit as well: it pushes developers into the pit of success by encouraging good programming practices and making it easy to do the right thing.
- It ships with ESLint set up with a solid set of linting rules. Every time you violate good JavaScript practices, it displays a prominent warning on the command line as well as in your browser console.
- It ships with Jest set up and ready to go, so you have no excuse for not writing tests for your components.
- It ships with support for Flow, so you have no excuse for not adding type annotations to your codebase.
- It ships with support for bundle splitting, so you have no excuse for shipping giant 3MB bundles to your users (see the sketch after this list).
- It supports making your app offline-first using the ServiceWorker API.
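To illustrate the bundle splitting point, a dynamic import() is all it takes to get Webpack to emit a separate chunk that’s fetched on demand. The ./reports module here is hypothetical:

// Bundle splitting with dynamic import(); ./reports is hypothetical.
// Webpack emits it as a separate chunk, fetched the first time the
// button is clicked instead of on initial page load.
function handleShowReports() {
  import('./reports')
    .then(reports => {
      reports.renderDashboard(document.getElementById('root'));
    })
    .catch(err => {
      console.error('Failed to load the reports chunk', err);
    });
}

document.getElementById('show-reports').addEventListener('click', handleShowReports);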
While working on a project, doing the right thing often means overcoming inertia. If you’re anything like me, you want to spend as much time as you can working on the meat of your application, on the parts that make it unique. Spending half an afternoon setting up Flow, or yet another test runner, or yet another linter feels like extraneous grunt work that doesn’t move the project forward in a measurable way.
By shipping a selection of code quality tools configured and set up right out of the box, create-react-app takes the inertia out of doing the right thing. And it doesn’t stop there! Modern frontend development is an exercise in choosing between libraries and tools that seem to do similar things, to the point where developers often suffer from analysis paralysis. This tool makes many of these choices for you, eliminating that cognitive burden and freeing you to concentrate on what matters most: your business logic.
That said, while create-react-app makes it easy to do the right thing, it doesn’t make it particularly hard to do the wrong thing. There’s nothing stopping developers from bloating their bundle sizes by pulling in tens of third-party modules from NPM, or from serving render-blocking JS/CSS, or from creating jank by attaching expensive event handlers to scroll events.
I’m not sure how these malpractices can be discouraged. Perhaps integration with WebPageTest or Google’s Lighthouse would help? Or maybe the build script could warn you when your bundle size exceeds a certain limit? Perhaps these problems should be tackled elsewhere in the stack?
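The bundle size warning, at least, is easy enough to hack together yourself. A crude post-build check might look something like this sketch; the output path and budget are hypothetical, and create-react-app’s real build filenames include a content hash:

// Crude post-build bundle budget check; run it after `npm run build`.
// The path and budget are hypothetical. Exits non-zero when the
// bundle is over budget, failing the build on CI.
'use strict';
const fs = require('fs');

const BUNDLE_PATH = 'build/static/js/main.js';
const BUDGET_BYTES = 150 * 1024;

const size = fs.statSync(BUNDLE_PATH).size;
if (size > BUDGET_BYTES) {
  console.error('Bundle is ' + size + ' bytes; budget is ' + BUDGET_BYTES + ' bytes.');
  process.exit(1);
}
console.log('Bundle is ' + size + ' bytes; within budget.');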
Regardless of whether create-react-app chooses to tackle these problems or not, the tool as it stands now makes shipping quality React code painless and fun, and has absolutely changed the way I work with React.
If you haven’t tried it yet, you can check it out here.
Migrating from Jekyll to Medium
I recently migrated my self-hosted Jekyll blog to Medium. I have no specific reason for choosing Medium, except that it’s in vogue in the communities I follow. I don’t have strong opinions about blogging platforms.
What follows is a quick account of how I made the transition.
Step 0: Set Up a Medium Publication
This step is self-explanatory, but I’m explicitly listing it because it’s necessary to have a Medium publication if you want to use a custom domain for your blog.
I set up a new custom domain (blog.ankursethi.in) for my Medium publication. My old blog (ankursethi.in) currently redirects to Medium, but in the future I plan on using it to showcase my work as a front-end developer.
Step 1: Migrate Your Posts
Currently, Medium only supports importing data from WordPress, but you can use jekyll2medium to get Jekyll to spit out a WordPress export file.
I lost quite a bit of formatting information during the export process, mostly in code blocks, but I only had a few posts with significant amounts of code so fixing them manually wasn’t a big deal.
Step 2: Set Up Redirects
Cool URIs don’t change. It’s frustrating to bookmark a page and, months later, have a 404 shoved in your face when you try to access it again.
Having my Medium blog on a different sub-domain from my old blog means it’s easy to set up redirects for all my old content. A tiny DigitalOcean droplet running Nginx listens to requests on ankursethi.in and responds with a 301 to requests that try to access my old content.
My rewrite rules look something like this:
rewrite ^/?$ https://blog.ankursethi.in last;
rewrite ^/2014/01/book-review-the-essence-of-camphor-by-naiyer-masud/?$ https://blog.ankursethi.in/book-review-the-essence-of-camphor-by-naiyer-masud-230346b579e9 last;
rewrite ^/2014/01/my-reading-list-for-2014/?$ https://blog.ankursethi.in/my-reading-list-for-2014-604b10d1a74a last;
rewrite ^/2014/01/2013-year-in-review/?$ https://blog.ankursethi.in/2013-year-in-review-893e995816ca last;
rewrite ^/2013/07/loading-spinners-with-angularjs-and-spin-js/?$ https://blog.ankursethi.in/loading-spinners-with-angularjs-and-spin-js-dc1e4a57df8a last;
rewrite ^/2013/07/simulating-a-slow-internet-connection/?$ https://blog.ankursethi.in/simulating-a-slow-internet-connection-f6c883b4e0a6 last;
rewrite ^/2013/05/an-introduction-to-cmake/?$ https://blog.ankursethi.in/an-introduction-to-cmake-43b4f08ac453 last;
rewrite ^/2013/04/all-about-iteration/?$ https://blog.ankursethi.in/all-about-iteration-40aed6712632 last;
rewrite ^/2013/04/tastypie-and-timezones/?$ https://blog.ankursethi.in/tastypie-and-timezones-a682ac883302 last;
rewrite ^/2013/03/travel-light/?$ https://blog.ankursethi.in/travel-light-888b8e22a528 last;
rewrite ^/2013/03/wordpress-under-siege/?$ https://blog.ankursethi.in/wordpress-under-siege-f10952732268 last;
rewrite ^/2013/03/okay-wordpress-you-win-this-round/?$ https://blog.ankursethi.in/okay-wordpress-you-win-this-round-434f7c4488ff last;
rewrite ^/2012/12/2012-year-in-review/?$ https://blog.ankursethi.in/2012-year-in-review-24a92b0f9550 last;
rewrite ^/2012/11/mobile-tweaks-and-chrome-extension/?$ https://blog.ankursethi.in/mobile-tweaks-and-chrome-extension-85c5d4c2af29 last;
rewrite ^/2012/11/bookmarks/?$ https://blog.ankursethi.in/bookmarks-ed70dacbbcf last;
rewrite ^/2012/11/scripting-tmux/?$ https://blog.ankursethi.in/scripting-tmux-bf4e0e9cea81 last;
rewrite ^/2012/08/a-django-admin-wishlist/?$ https://blog.ankursethi.in/a-django-admin-wishlist-9dac472e18f6 last;
rewrite ^/2012/07/cache-all-the-things/?$ https://blog.ankursethi.in/cache-all-the-things-5c7589e81afe last;
rewrite ^/2012/07/a-whole-new-can-of-beans/?$ https://blog.ankursethi.in/a-whole-new-can-of-beans-e76ddab0ebeb last;
And that’s that. All my content is now safely on Medium, I don’t lose my search rankings, and all my old URLs still work!