A Successful Way Of Building Software

We've been developing web apps since 2008. At first we mostly used Drupal, which was completely replaced by Django a couple of years later.

In the process of building websites we've learned a lot, and we've improved our workflow by automating tasks and avoiding hand-written boilerplate code. With every new project we also fix the things we didn't get quite right in the previous one. The overall objective is to make projects easy to understand and to minimize friction.

This is the typical workflow we follow when developing a new project.

1. Understanding the client's story and setting estimates

We start every project by listening to our clients carefully, taking notes, suggesting ways to improve things, and generally laying out the overall project idea. This process may take one or several sessions depending on the project's complexity.

Afterwards, we sit down with our team and estimate how much time it will take us to build the software. We do this by dividing the project into smaller sections and then splitting each section into issues in Jira, the software we use for managing projects.

There are five main statuses a ticket can be in. The first is To do, which simply holds pending tickets. When a team member starts working on a ticket it is moved to In progress, and then to Quality assurance to be reviewed once it's finished.

After it passes QA it's ready to be reviewed by the product owner, a role played by one of our team members, and it's moved to Done once it's accepted.

A status we've added is Stopped, which holds tickets that didn't pass QA and need to be fixed.

2. Defining entities and relationships

At this stage, we define the models (the entities of the project) and how they're related. This is usually designed by our lead developer, then discussed and improved with all the backend developers involved in the project. We try to keep everything very simple and to avoid premature optimisation; we inevitably end up adding model attributes and tweaking small things later during implementation.
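
As a rough illustration of this stage (the entities below are hypothetical, not taken from any real project), two related Django models might start out as simply as this:

from django.db import models


class Project(models.Model):
    """A hypothetical top-level entity."""
    name = models.CharField(max_length=100)


class Ticket(models.Model):
    """A hypothetical entity related to Project; attributes are added later as needed."""
    project = models.ForeignKey(Project, on_delete=models.CASCADE)
    title = models.CharField(max_length=200)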

3. Coding, development workflow and deployment

Nowadays we use Yeoman and a generator we built to scaffold the basic project structure, which saves us from writing boilerplate code. Afterwards, we set up the project's git repository with two main branches, master and dev, representing production and development respectively.

Once the repo is set up, the other developers are ready to hack: each one clones the repo and creates a branch named after the ID of the Jira ticket they want to work on.
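
For example, assuming a Bitbucket repository (the URL below is made up) and a hypothetical ticket ID PROJ-123, starting work on a ticket looks roughly like this:

# Clone the project and switch to the development branch.
$ git clone git@bitbucket.org:ourteam/project.git
$ cd project
$ git checkout dev

# Create a working branch named after the Jira ticket.
$ git checkout -b PROJ-123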

Pull requests

We use pull requests to integrate code into the staging and production servers, and we try to keep them very concise and small so they're easy to review and errors can be spotted quickly. With smaller pull requests we've noticed a faster flow, because the person reviewing them spends three minutes or less on each one instead of twenty.

Continuous integration

As stated before, we use a staging server that is available on the web and mirrors the dev branch. Every time a pull request pointing to dev is merged, the code is deployed to the staging server using Bitbucket hooks and Jenkins CI.

The deployment process has two steps: first the Django tests are run, and then the code is deployed to the server if they pass. We're notified of each step in our company's HipChat chat.
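
A minimal sketch of those two steps (the deploy script below is a placeholder; the real Jenkins job is configured per project):

# Run the Django test suite, then deploy only if it passes
# (deploy.sh stands in for the project's real deploy step).
$ python manage.py test && ./deploy.sh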

Sprints

For us, iterating is key to keeping a project manageable, so we rely on an agile development process built around sprints. These days our sprints last two weeks, since we realized this lets us complete more meaningful pieces of functionality, which makes things easier for our QA team. This is opposed to the one-week sprints we ran before, where the QA team had to test isolated pieces of a larger process.

4. Design, styling and front-end tooling

After the backend development process is finished, we start styling the web app. This may seem backwards, since styling a finished template looks harder than adding functionality to an already styled interface; however, we've found that with a few simple conventions, styling a working interface becomes easier. The conventions we use are:

  • Use a js- prefix on every markup element that is affected by or bound to JavaScript.
  • Keep markup as simple as possible.
  • Don't worry about how the page looks while developing the initial functionality.

This way we know we can move things around when adding styles without breaking anything.
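
A small sketch of the js- convention (the markup and handlers here are hypothetical): presentational classes can change freely during styling, while the js- hooks stay stable for the scripts.

<!-- "js-" classes are only referenced from JavaScript; -->
<!-- purely presentational classes are free to change when styling begins. -->
<button class="button js-toggle-menu">Menu</button>
<nav class="menu js-menu">...</nav>

<script>
  // Bind behaviour to the js- hooks, never to presentational classes.
  $('.js-toggle-menu').on('click', function () {
    $('.js-menu').toggle();
  });
</script>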

CSS/SASS/Compass

We start our styles structure by defining a project palette: a set of variables with the colors we'll use throughout the site. As a CSS preprocessor we use Sass, in conjunction with Compass, which provides a convenient set of mixins.
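
For instance, a palette file might look like this (the names and colors are invented for illustration):

// _palette.scss: the colors used throughout the site.
$color-primary: #2c3e50;
$color-accent:  #e74c3c;
$color-text:    #333333;

// Elsewhere in the stylesheets, only the variables are referenced.
.button {
  background-color: $color-primary;
  color: $color-text;
}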

Dependencies

Bower turned out to work very well for us, so we manage all front-end dependencies with it. We also use Gulp to compile assets, create tags following SemVer, and work in conjunction with LiveReload.

5. QA and shipping

Before the site is ready for production, our QA team navigates it fully to find where it breaks, so that we can fix those issues before deploying. Then our client uses the site and confirms that everything looks fine, and we move to production.

Moving to production is just a merge into the master branch.

Conclusion

Keeping things simple, automating as much as possible, and learning from mistakes are the key things to keep in mind to make the overall software development process easier.

More reading

If you want to learn more about the things we went over, here are some relevant links.

This article was originally published on the AxiaCore blog.

Dependency Management In Django Using Bower

In today's web applications it's very common to use libraries and to have several front-end dependencies. Usually we start by including jQuery and then keep adding more dependencies as we need them. We also see more and more developers following best practices like asset minification and concatenation, often taking advantage of Django Pipeline, which makes it easy to have a list of files minified and merged into a single one.

One thing that isn't widely done in the wild, though, is versioning front-end dependencies, in contrast with the way we version backend libraries and frameworks, which is usually very specific.

Versioning front-end dependencies can save us a lot of time when updating, keeping track of which libraries are outdated, and debugging quickly, since we can easily spot which version of each library we have. Bower, from Twitter, is a popular package manager that addresses most of these cases: it helps us install, update, and remove dependencies like jQuery and Bootstrap in an easy way. It also keeps a list of installed packages that is updated every time we install, update, or remove a library.

The Plan

In this article I want to show you how to use Bower in your Django projects. Specifically, we'll take a look at how to define the folder structure, how to set up Bower, and how to get started with the basic commands. We'll also look at how to set up Jenkins for deployments and fetch front-end packages on build. So, without further ado, let's get started.

Folder Structure

For the example project, we've created a simple file structure based on AxiaCore's Django Project Template. This template comes with an app folder in which we'll keep all our assets. The example project can be downloaded from GitHub.

Installing NodeJS and Bower

Install Node.js from its website, then issue this command to install Bower globally.

$ npm install bower -g

The -g flag tells the Node Package Manager (NPM) to install the package globally, so that it's available everywhere and not only in the current project.

Bower.json

bower.json is the file that describes our project; here, among other information, we have a list of all the libraries we're using. Every time we install a new library we can pass the --save option to the install command and the library will be added to bower.json.

{
  "name": "awesomeProject",
  "version": "0.1.0",
  "dependencies": {
    "jquery": "~2.1.1"
  }
}

Specifying packages directory with .bowerrc

Using .bowerrc we can specify the folder in which dependencies are stored; by default all packages are saved in bower_components. This is particularly helpful in Django projects, as assets are usually not stored directly in the root folder.

{
  "directory": "app/static/bower_components"
}

With this configuration, components will be saved under app/static/bower_components, since in this example we keep all our assets in app/static; the location of the assets folder may vary from project to project.

Using .gitignore

One of the nice things about using Bower is that we don't have to store project dependencies in our repository, because each developer can install them by running bower install within the project folder. Given that, we want git to ignore the bower_components folder.

# ...
# Ignore bower components.
bower_components
# ...

Installing new dependencies

Now that we have Bower installed, let's add a couple of libraries to our project.

$ bower install jquery --save
$ bower install bootstrap --save

These commands fetch jQuery and Bootstrap into our bower_components folder. We pass the --save option in order to have these libraries listed in bower.json for future use. Having all dependencies saved in bower.json is useful because other developers can clone the project and get every dependency by simply running bower install, instead of one command per package.
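
So a new developer joining the project only needs something like this (the repository URL is a placeholder):

$ git clone https://bitbucket.org/ourteam/project.git
$ cd project
# Reads bower.json and fetches every listed dependency.
$ bower install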

Another useful command is search, which shows all the packages in the Bower registry that match our query.

$ bower search underscore

In order to use a package from the list, just install it by the name shown in the search results.
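
For example, after searching for underscore we can install it, again passing --save so it's recorded in bower.json:

$ bower install underscore --save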

bower list is another useful command that shows all the packages we have installed along with their versions.

$ bower list

Including Packages In Project

Now that we have the packages, we can include them in our project.

Using Django Assets Pipeline

Django Pipeline is a nice library for asset concatenation and compression; with it, we can list our assets in a configuration file and call them from a template with a single tag. To use the libraries we just installed, go to settings.py and add the following lines to include the Bootstrap styles and jQuery.

# CSS Files.
PIPELINE_CSS = {
    # Project libraries.
    'libraries': {
        'source_filenames': (
            'bower_components/bootstrap/dist/css/bootstrap.css',
        ),
        # Compress the listed libraries and write
        # the output to `css/libs.min.css`.
        'output_filename': 'css/libs.min.css',
    },
    # ...
}
# JavaScript files.
PIPELINE_JS = {
    # Project JavaScript libraries.
    'libraries': {
        'source_filenames': (
            'bower_components/jquery/dist/jquery.js',
        ),
        # Compress all passed files into `js/libs.min.js`.
        'output_filename': 'js/libs.min.js',
    },
    # ...
}

After this setup, you can go to the template in which you want to use the libraries and include them using compressed_js and compressed_css from Django Pipeline.

<!-- The compressed tags are provided by Django Pipeline. -->
{% load staticfiles compressed %}
<!-- Include CSS files -->
{% compressed_css 'libraries' %}
<!-- ... -->
<!-- Include JavaScript files -->
{% compressed_js 'libraries' %}

The compressed_css tag renders the link tags for all the styles we specified in PIPELINE_CSS in settings.py, and compressed_js does the same with the scripts in PIPELINE_JS. Note that the string we pass to the tag is the name of the group of files we want to use.

In development, the compressed tags print markup that loads each file individually using link or script tags; in production, they compress the assets and render a single tag that loads the compressed file with the concatenated assets.
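
As a rough illustration (the exact markup and paths Pipeline generates depend on your configuration), the same template tag goes from one tag per source file in development to a single bundled file in production:

<!-- Development: one tag per source file. -->
<link rel="stylesheet" href="/static/bower_components/bootstrap/dist/css/bootstrap.css">

<!-- Production: a single tag for the concatenated, minified bundle. -->
<link rel="stylesheet" href="/static/css/libs.min.css">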

Using Bower With Jenkins

One of the things that keeps developers from using Bower is that there isn't much documentation on how to build and deploy projects that use NPM with Jenkins; however, with a little configuration we can have Jenkins build our projects and execute NPM commands.

Jenkins is a continuous integration server that's widely used in the Django community. It lets us deploy applications, run tests, lint source code, and more. We can also use it to run NPM commands and fetch our project assets from the Bower registry, so that we don't have to keep them in our repository.

In order to run NPM with Jenkins we have to install the NodeJS Plugin, which helps us with the Node.js setup. The preferred way of installing this plugin is via the Jenkins plugin manager, which can be reached by going to your Jenkins home and adding /pluginManager to the URL. Once there, in the Available tab, we can search for the NodeJS Plugin and install it.

jenkins.sh configuration

Within the Jenkins build file, jenkins.sh, we tell it to install Bower and then install the packages listed in bower.json.

# ...
# Keep NodeJS in the current workspace.
export npm_config_prefix=.npm/
export PATH=.npm/bin:$PATH

# Install bower.
npm install -g bower

# Install bower packages.
bower install
# ...

After adding the above lines to the Jenkins build file, you can commit and push your project changes so a build is triggered; Jenkins will then take care of installing Bower and fetching the needed packages.

Conclusion

I've personally found Bower to be very helpful in my development workflow; it keeps my dependencies organized and lets me manage them easily, installing, uninstalling, and updating them with simple commands.

I hope you start taking advantage of it and include it in your workflow. Also, don't hesitate to tell us how you're using it!

Special thanks to Tuts+ for their article on Bower and to Pascal Hartig for his article on integrating Jenkins with Node.js packages.

Get the example code on GitHub.

This article was originally published on the AxiaCore blog.

Removing Custom Post Type Slug From Permalinks In WordPress

A few days ago I needed to remove the custom post type slug from the URLs of a WordPress project, so that our post type URLs would look prettier: instead of .com/event/my-awesome-event, they would read .com/my-awesome-event. In our case we decided to use a custom post type because we wanted to keep events separated from regular posts, making it easier to find them on the dashboard; however, we wanted the permalink structure to remain the same for both.

After trying different solutions, including editing the .htaccess file, I found that the best and simplest way to do it is the technique described by the folks at WordPress VIP, which not only allows URLs without the post type slug, but also modifies the permalinks so that they point to the prettier URLs. The technique has two parts.

First, the post type is registered and a filter is hooked onto post_type_link, so that the permalinks generated for the post type have the slug removed.

/**
 * Register a custom post type but don't do anything fancy
 */
register_post_type( 
    'event', array( 'label' => 'Event', 'public' => true ) 
);
/**
 * Remove the slug from published post permalinks. 
 * Only affect our CPT though.
 */
function vipx_remove_cpt_slug( $post_link, $post, $leavename ) {
 
    if ( ! in_array( $post->post_type, array( 'event' ) ) 
        || 'publish' != $post->post_status )
        return $post_link;
 
    $post_link = str_replace( 
        '/' . $post->post_type . '/', '/', $post_link 
    );
 
    return $post_link;
}
add_filter( 'post_type_link', 'vipx_remove_cpt_slug', 10, 3 );

Then, as the second part, we tell WordPress to resolve URLs like .com/<post-or-page-name> not only to posts and pages, but also to custom post types.

/**
 * Some hackery to have WordPress match postname to any of our public 
 * post types. All of our public post types can have /post-name/ as 
 * the slug, so they better be unique across all posts. Typically core 
 * only accounts for posts and pages where the slug is /post-name/
 */
function vipx_parse_request_tricksy( $query ) {
 
    // Only noop the main query
    if ( ! $query->is_main_query() )
        return;
 
    // Only noop our very specific rewrite rule match
    if ( 2 != count( $query->query )
        || ! isset( $query->query[ 'page' ] ) )
        return;
 
    // 'name' will be set if post permalinks are just post_name, 
    // otherwise the page rule will match
    if ( ! empty( $query->query[ 'name' ] ) )
        $query->set( 'post_type', array( 'post', 'event', 'page' ) );
}
add_action( 'pre_get_posts', 'vipx_parse_request_tricksy' );

The only caveat of this method is that you should take care not to have a regular post and a custom post type post with the same URL, because WordPress would likely render the most recent one. Other than that, it's a very helpful way to get URLs with a simpler format.

Troubleshooting

Since this post was published, there have been several comments from users having issues with rewriting URLs. Below are instructions on how to solve the most common errors.

1. Getting a 404 error

This happens because you haven't added your post type name to the array passed as the second parameter to $query->set. In this case our post type is called event, so our array looks like the following.

array( 'post', 'event', 'page' );

However, if you named your post type differently, you should add its name to the array, so that it looks like this.

array( 'post', 'YOUR-POST-TYPE-NAME', 'page' );

The example below shows the section in which you should pass the array.

function vipx_parse_request_tricksy( $query ) {
    
    // ... 
    
    // Add post name to the array. 
    if ( ! empty( $query->query[ 'name' ] ) )
        $query->set( 'post_type', array( 'post', 'event', 'page' ) );
}

2. Custom post type rendering with single.php rather than single-<type>.php

This problem is likely related to the way you define your custom post type. In this example we define it in the simplest way.

register_post_type( 
    'event', array( 'label' => 'Event', 'public' => true ) 
);

Nevertheless, when you register the post type with more attributes, you should be careful with the rewrite settings. To learn more about registering post types, take a look at the register_post_type documentation.
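
For example, a fuller registration (illustrative only, not part of the original snippet) might pin the rewrite settings explicitly so they stay consistent with the filter shown earlier:

register_post_type( 'event', array(
    'label'   => 'Event',
    'public'  => true,
    // Keep the slug in sync with the one stripped by vipx_remove_cpt_slug();
    // remember to flush rewrite rules after changing it.
    'rewrite' => array( 'slug' => 'event', 'with_front' => false ),
) );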

Useful links

Below are links to the documentation of the functions used in this article.

Debugging WP AJAX Actions

When testing WordPress AJAX requests, I usually find myself filling in forms over and over again in order to get data on the server side for the functionality I want to build.

Given how long that process took me, I looked for an easier way to get _POST data on the server, and realized that the data could be set directly on the _POST variable; then I could trigger an action to execute the function or method that handles the request.

A simple example of this method, used for creating a post via AJAX without validation, is the following.

Create a request handler

Create the function that will handle the request. This sample function is globally accessible; however, you may want to put it in a controller or a simple class that handles requests.

/**
 * Function that will handle post insertion with the data
 * in the $_POST request.
 *
 * <code>
 *   // Sample returned response
 *   array( 'status' => 200 );
 * </code>
 *
 * Note: you should validate your data before creating a
 * WordPress post.
 */
function ajax_create_post() {

    // Set post data up.
    $data = array(
            'post_title' => $_POST[ 'post_title' ]
        ,   'post_content'  => $_POST[ 'post_content' ]
    );

    // Create a response object.
    $response = ( object ) array(
            'status' => 600 // Error
    );

    // Create a post.
    $post_id = wp_insert_post( $data );

    // Check whether the post was inserted.
    if ( is_numeric( $post_id ) ) {
        $response->status = 200;
        $response->post_id = $post_id;
    }

    // Print response.
    echo json_encode( $response );
    die();

}

Bind the function to an action

Bind the function that will handle the request to a WordPress action so that every time the action is triggered, the handler function is executed.

// Bind action "wp_ajax_ajax_create_post" to "ajax_create_post" function.
add_action( 'wp_ajax_ajax_create_post', 'ajax_create_post' );

Set attributes in the _POST variable

Set the data you want your handler to get when the action is triggered.

// Set the _POST attributes you want to use.
$_POST = array(
        'post_title' => 'My post title'
    ,   'post_content' => 'Sample post inserted via AJAX'
);

Trigger the action

Now that the action is bound to a function and we've set the _POST attributes, we can trigger the action to actually insert a post and test our request handler.

// Trigger action
do_action( 'wp_ajax_ajax_create_post' );

Directly defining _POST attributes has helped me test my code easily, and I hope it's also helpful for you.

The complete example can be found in this gist. You can also read the WP AJAX documentation.

Hosting Your Site on GitHub Using Jekyll

A few days ago I decided to create my personal site. I was thinking about which CMS to use for it, and since I usually try to adopt new technologies when I start a new project, I decided to use something new to me this time.


Since I had been working a lot with WordPress, I decided that this time I would try Jekyll, which is very popular, is simple, and can be hosted for free on GitHub.

The process of setting up the site was pretty straightforward, and so was setting up a custom domain for it. Now I'll go over the setup a little bit.

Creating a GitHub Page

GitHub Pages are websites whose source code is hosted in a GitHub repository. They let you use HTML, CSS, and JavaScript, and they also support Jekyll, a static publishing platform.

You have two options to set up a GitHub Page. The first, a Project Page, is intended for pages that belong to a specific project: for example, when you've created a repository for your plugin and you want it to have a page where you can show demos, detailed documentation, pictures, and the like. In this case, in order to have a page for it, you create a branch called gh-pages in the project's repository; it will hold your index.html and be the root of the page.

The second option, a User/Organization Page, is intended for personal and organization pages. In this case, you create a repository named your-username.github.io and place the content for your site in the master branch of that repository. Given this, if you create a Project Page called, say, snacky, its URL will be http://your-username.github.io/snacky. On the other hand, if you create a User/Organization Page it will be available at http://your-username.github.io.

Setting up Jekyll

Jekyll comes with several features, like templates, posts, Markdown, a permalink structure, categories, tags, and others, which make it easy to publish content. I also enjoy the fact that you can write posts in your favorite code editor and have them on GitHub after a simple commit.

Since Jekyll is a Ruby gem, you need Ruby installed to run it. Download Ruby for your operating system and, after you have it installed, install Jekyll from the command line.

$ gem install jekyll
$ gem install rdiscount

After you have Jekyll installed, you can download the starting-point site, a boilerplate with Bootstrap included that will help you with the setup.

The structure of the basic site includes layouts, posts and folders to store assets.

.
|-- _config.yml
|-- _layouts
|    |-- default.html
|-- _posts
|    |-- 2013-05-16-getting-started.md
|-- _site
|-- css
|-- js
|-- index.html
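
Each file in _posts begins with a YAML front matter block that Jekyll reads for metadata, followed by the content. A minimal post (illustrative, not copied from the boilerplate) looks roughly like this:

---
layout: default
title: Getting started
---

The post body goes here, written in Markdown.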

The configuration file _config.yml specifies global configuration options for the site, like the permalink structure, regeneration, timezone, and the like.

# _config.yml
auto: true
permalink: /:year/:month/:day/:title

After you have downloaded the basic site, you're ready to run Jekyll. Note that the _site directory is the result of the build Jekyll performs on the site files, so you don't have to modify it. Change directory to the folder you downloaded the files into, open the command line, and start the Jekyll server by typing the following command.

$ jekyll --server

After doing so, you will get output more or less like this:

Configuration from C:/mysite/_config.yml
Auto-regenerating enabled: C:/mysite -> C:/m
[2013-05-15 20:34:22] regeneration: 59 files changed
[2013-05-15 20:34:22] INFO  WEBrick 1.3.1
[2013-05-15 20:34:22] INFO  ruby 1.9.3 (2013-02-22) [i386-mingw32]
[2013-05-15 20:34:23] INFO  WEBrick::HTTPServer#start: pid=4444 port=4000

By default, the Jekyll server runs at http://localhost:4000, so you can open the browser and take a look at your site.

Once your site is running, you can play around with it: add some CSS, customize the layouts, and so on, to get it ready to deploy to GitHub.

Publishing the site on GitHub

In order to deploy the site to GitHub, just create a repository for it and check out the proper branch for your type of page (Project Page: gh-pages; User/Organization Page: master). Then point your local repository at the one you created on GitHub.

User/organization setup

$ git remote add origin https://github.com/your-username/your-username.github.io.git
# Creates a remote named "origin" pointing at your GitHub repository

$ git push origin master
# Sends your commits in the "master" branch to GitHub

Project page setup

$ git remote add origin https://github.com/your-username/your-project.git
# Creates a remote named "origin" pointing at your GitHub repository

$ git push origin gh-pages
# Sends your commits in the "gh-pages" branch to GitHub

After doing this, you may have to wait up to 10 minutes to reach your site at the respective URL.

Jekyll sites don't need any special setup in order to be generated once they are on GitHub, since GitHub runs the generation process right after every push. So if you create a post or make, say, a layout change and push it to GitHub, it will be on your live site automatically.

Conclusion

Now you have a basic understanding of how to set up a Jekyll site. If you want to learn more about the technologies we went over, here are some resources that may be helpful.