Keeping Up With Project Changes

Published in Open Source

My ninth week of working on the open-source Marketplace and Add-ons project has now come to a close. Overall, the week went quite well, consisting of two new open pull requests and one solid lesson learned. Continuing along similar lines, the bugs I tackled were mainly recommended to me by one of the main contributors on the project, whom I have been working closely with (OK, I’ve just been bugging him a lot on IRC).

Making Workers Retry

The first bug I took a look at this week was related to the background workers that the Marketplace project makes use of. In most web-based projects there are time-consuming pieces of code that you want to run without making the user wait for them to complete. Enter Celery. Celery brands itself as a distributed task queue, which really means it lets you queue up jobs to run at some point in the future. The Marketplace workers for a particular task are supposed to retry up to a maximum of 5 times if the task fails to complete successfully. This did not appear to be the case, so my job was to find out why.

My first step was to figure out how to actually run the Celery workers and invoke a task. I first had to install RabbitMQ, which is a messaging queue that powers the backend for Celery:

brew install rabbitmq

I was then able to run the Celery workers with:

python manage.py celeryd -E -B --loglevel=DEBUG

Usually in the test and development environments, the workers are configured not to actually use the queue (since that would require extra setup). I enabled the workers to use the queue within settings_local_mkt. Next, I created a little test script that simply enqueues an asynchronous task:


from mkt.webapps.tasks import update_manifests
update_manifests.apply_async(args=[[337144]])

I did some manual debugging and made the particular task, update_manifests, fail on purpose by raising an exception. Within my Celery worker, I could see that no retry was being scheduled. After some reading of the Celery documentation, I learned that for a task to be retried, a particular exception must be raised in the worker. Oddly enough, the worker seemed to be catching that exception and never re-raising it! After I removed that try block, I found that the Celery task was being re-queued for retrying. I opened a pull request with my fix here. I have been asked to look further into why all of the tests have been passing. So far I suspect the tests pass because the queue is not actually being used and all tasks are just run synchronously (though I’m not 100% certain this is the case).

Lesson Learned

After opening that pull request, I set my sights on a new bug to fix. In this Marketplace bug, a user review added to an application would not show up right away; it would only appear after the page was reloaded. It seemed high priority, so I started trying to reproduce it. After starting up my development server, I got an error related to Elasticsearch (surely related to the work I had been doing with it on some past bugs). I had to make a slight change in my settings file, setting CELERY_ALWAYS_EAGER = True, and re-index Elasticsearch using the command:

./manage.py reindex_mkt --settings=settings_local_mkt

With the errors resolved, I found that I was able to add and delete reviews without any of the problems described in the bug report. However, I could reproduce the bug on the Marketplace development environment. I made a note on the bug that I was unable to reproduce it locally, and within a day I learned that someone else had fixed it. As it turns out, the bug was not in the main Zamboni codebase but in the front-end code, Fireplace. This turned out to be a really good lesson for me: I hadn’t updated Fireplace’s code when attempting to reproduce the bug. Since I work on Zamboni so often, I tend to forget that a bug’s origin can also be in the front-end code.

Fixing the Theme Details Layout

After brushing off that mix-up, I switched gears again to work on another bug within the Add-ons project. In this bug, artists with very long display names were breaking out of the layout on a Theme detail page. I intended to figure out a way to truncate those names so that they would not extend beyond the layout.

After switching from the Marketplace project, I find that destroying and recreating the test database yields the best results:

make test_force_db SETTINGS=settings_local_amo

Upon running my development server, I found that I was getting an error just trying to load the theme page:

AttributeError: 'NoneType' object has no attribute '_meta'

This error was not happening on Mozilla’s development environment, so I knew something must be wrong in my personal setup. I rolled my code back around 30 commits, to just before a change that seemed related. Sure enough, the page started loading again. After a bit of debugging, I found that everything worked if I disabled the Django Debug Toolbar. This also explained why the dev environment was working properly: the toolbar is turned off there, so the error never occurs.

Once I had that error figured out, I was able to start in on the bug I had intended to solve all along. I modified my test user’s display name in the database to be very long, replicating the scenario. I figured I could simply truncate the names, but as it turns out, the names are “linkified” into links to the user’s profile page. This causes problems, since the truncation must be applied only to the inner text of the link. I modified two different helper functions to accept a maximum length, representing the length a name should be truncated to (while also adding an ellipsis). The result looked something like this:
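The original screenshot isn’t reproduced here, but the idea behind the change can be sketched like this. The function and parameter names below are illustrative, not the actual Zamboni helpers (which also handle HTML escaping):

```python
def truncate(text, max_length):
    """Shorten text to max_length characters, appending an ellipsis."""
    if len(text) <= max_length:
        return text
    return text[:max_length].rstrip() + '...'

def user_link(profile_url, display_name, max_length=None):
    """Build a profile link, truncating only the link's inner text.

    Truncating before linkifying is what keeps the <a> markup intact;
    truncating the finished HTML string could cut the tag in half.
    """
    name = truncate(display_name, max_length) if max_length else display_name
    return '<a href="%s">%s</a>' % (profile_url, name)
```

For example, user_link('/user/1/', 'A' * 40, max_length=10) yields a link whose visible text is ten characters plus an ellipsis, while the href is untouched.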

I added a new unit test for the users_list helper to illustrate the shortening behavior. I’m not entirely sure what the reviewers will think of this approach, but time will tell! My pull request can be found here.

I am looking forward to continuing on my bug-fixing trek next week!


Written By

Andrew Halligan

Published March 15, 2014