1. Reason for writing unit tests and how they help us in the development process
  2. The peculiarities of testing and the ways to generate data
  3. How to write unit tests?
  4. TDD, or test-driven development
  5. Mock: what’s that and why should you apply it?
  6. Conclusion

Reason for writing unit tests and how they help us in the development process

Let’s say we are in the middle of developing a project with complicated logic and a number of various forms. The first sprint is behind us and we’re moving towards the end of the second one. We submit some changes to the logic behind a form’s behavior and think everything is alright. However, we soon get a bug report from the QA engineering team telling us that part of the functionality we implemented during sprint one has failed. Could we have avoided this?

A project is a collection of software components, each performing one small task. A component receives some data, performs operations according to its business logic, and returns a result. Knowing the code behind a component, we can predict the result for any incoming data. To check that the project works correctly, different kinds of testing techniques are used, unit testing being one of them. In this technique, a test case is written for every possible variant of a component’s behavior. It makes sure a component returns a definite result given certain data known beforehand. If every component passes its tests, we can assume the whole project functions well.

When is the right time for writing tests and what are their advantages?

If you are working on a landing page, you will hardly need unit tests. Still, we are totally positive about writing them anyway, at least smoke tests. These tests will let you avoid some critical mistakes in your code, though they won’t save you from invalid data. This type of testing originates from radio engineering: engineers would apply electrical power to a board and watch to see whether smoke appeared, which indicated a malfunction. In our case the crux is the same: if there is an error, something doesn’t work properly. So it is always better to write tests that check your code’s execution.
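As a minimal illustration, here is what such a smoke test might look like with the standard unittest module (`render_landing` is a hypothetical stand-in for real project code):

```python
import io
import unittest


def render_landing():
    """Hypothetical stand-in for real project code that builds a page."""
    return "<html><body>Hello</body></html>"


class SmokeTest(unittest.TestCase):
    """A smoke test only checks that the code runs and produces something sane."""

    def test_render_does_not_crash(self):
        html = render_landing()
        self.assertIn("<html>", html)


# Run the test; the StringIO stream just keeps the report out of stdout.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

If the code raises, the "smoke" shows up immediately as a failed run.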

Developers often say:

1. "Writing unit tests takes too much time";

2. "Running tests takes too much time";

3. "That’s not my job to do testing";

4. "I haven’t got a clue how the code works".

We answer:

1. Writing unit tests during the development process will save you a lot of time at the end of development: you can discover a bug at the very moment it appears.

2. You should write tests intending them to be fast. You can also configure tests to run on Jenkins on every push or pull request to the repository. That way the tests are not carried out on your computer, and code with mistakes doesn’t get into the stable branch.

3. Just no comments guys :)

4. This usually happens when the project was not developed from scratch by your team. In that case it is better to spend some time and figure the code out.

The peculiarities of testing and the ways to generate data

The majority of frameworks use the unittest module, so it doesn’t matter which one you use. For automating Python unit tests, this module supports several important concepts:

  • Test Case, i.e. a testing scenario (a set of conditions, variables, system states, or modes that are tested). It is generally indivisible and can include one or more asserts. In the Python documentation, a Test Case is a class derived from unittest.TestCase. In this article, however, a Test Case is a method of that class (one whose name starts with test_), not the class itself.

  • Test Suite, i.e. a set of Test Cases within a class or a module. Test Suites are formed on the basis of functional or logical characteristics.

  • Test Fixture, i.e. a number of functions or means that provide a consistent environment for running a testing scenario.

  • Test Runner, i.e. a component that controls test execution and displays the result to the user. A runner can use a graphical or text interface, or it can return a special value signaling the results of the test run.
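These concepts can be seen together in a small runnable sketch: two Test Cases are grouped into a Test Suite by hand and executed by a Test Runner (the class and test names are invented for illustration):

```python
import io
import unittest


class MathTestCase(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)

    def test_subtraction(self):
        self.assertEqual(5 - 3, 2)


# Test Suite: group the cases by hand instead of relying on auto-discovery.
suite = unittest.TestSuite()
suite.addTest(MathTestCase("test_addition"))
suite.addTest(MathTestCase("test_subtraction"))

# Test Runner: executes the suite and reports results (here into a string buffer).
stream = io.StringIO()
runner = unittest.TextTestRunner(stream=stream, verbosity=2)
result = runner.run(suite)
```

In everyday work you rarely build suites manually; the loader and discovery do it for you, but the objects involved are the same.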

Every test finishes with one of the following statuses:

. - ok - the test has been carried out successfully

F - FAIL - the test has failed

E - ERROR - an unexpected error occurred while running the test

x - expected failure - the test raised an exception that was expected

u - unexpected success - the test succeeded though an error was expected

s - skipped 'msg' - the test is skipped

Follow the link to get more information.

So Python does provide an instrument for testing out of the box. How do we put it to use? Let’s find out.

How to write unit tests?

You should always write unit tests alongside the code so that any developer engaged in the project can understand what is written there. As a rule, unit tests have a standard structure. A test case class is derived from TestCase and must be independent, meaning it shouldn’t hinge on other tests. The name of every test method should start with test_. Suppose we need to execute a set of instructions for setting up, loading, and subsequently deleting data. The unittest module provides a number of methods for this:

  • setUp – method called to prepare the test fixture; it is called before every test.

  • tearDown – method called immediately after the test method has been called and the result recorded. This is called even if the test method raised an exception

  • setUpClass – a method called before tests in an individual class run.

  • tearDownClass – a method called after tests in an individual class have run.

  • setUpModule – a method called before classes in an individual module run.

  • tearDownModule – a method called after classes in an individual module run.

setUpClass and tearDownClass must be used together with the @classmethod decorator, which declares a method that receives the class rather than an instance. Such a method can be called either on the class (Class.f()) or on an instance (Class().f()).

setUpModule and tearDownModule are implemented as separate functions in a module and do not belong to any class of the module.

import unittest


def setUpModule():
    pass


def tearDownModule():
    pass


class MyUnitTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        pass

    def setUp(self):
        pass


class MyFirstSetOfTests(MyUnitTest):
    @classmethod
    def tearDownClass(cls):
        super(MyFirstSetOfTests, cls).tearDownClass()

    def tearDown(self):
        pass

A word on how we write unit tests for a piece of functionality. We always start from the preparatory stage, setUp. Then we divide the functionality into logical parts and test each of them. It resembles the process of code development. For example:

1. Authorization.

def test_permissions(self):
    resp = self.client.get(self.login_url, self.valid_sign_up_data)
    self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)

    resp = self.client.patch(self.login_url, self.valid_sign_up_data)
    self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)

    resp = self.client.put(self.login_url, self.valid_sign_up_data)
    self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)

    resp = self.client.delete(self.login_url, self.valid_sign_up_data)
    self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)

2. Valid execution.

def test_success(self):
    resp =, self.valid_sign_up_data)
    self.assertEqual(resp.status_code, status.HTTP_200_OK)

    resp =, self.valid_log_in_data)
    self.assertEqual(resp.status_code, status.HTTP_200_OK)

3. Form’s errors.

def test_bad_request(self):
    data = copy.deepcopy(self.valid_sign_up_data)
    data['username'] = ''
    resp =, data)
    self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)

    data = copy.deepcopy(self.valid_sign_up_data)
    data['email'] = 'email'
    resp =, data)
    self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)

It is also convenient to move tests into a Python module and divide them into separate files, each file being responsible for a logical part of the functionality.

To run tests we need to generate an appropriate set of data. What methods can we use for that?

1. Create testing data set in advance.

2. Generate testing data set for every test.


You can create a definite set of data in advance, so-called fixtures. They are loaded as soon as the test run begins and are used during its execution. Still, this approach has some drawbacks:

1. You can’t store a big amount of data this way, as loading it takes time.

2. It is not a flexible approach for test execution.

3. If the data structure changes, you’ll have to change all the fixtures.

On the positive side, it lets you use a small and invariable set of data for test execution (for instance, a list of cities).
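For illustration, a static fixture of this kind might look like the following (a sketch assuming a hypothetical cities app with a City model; in Django such a file is loaded by listing it in the TestCase’s fixtures attribute):

```json
[
  {
    "model": "",
    "pk": 1,
    "fields": {"name": "London", "country": "GB"}
  },
  {
    "model": "",
    "pk": 2,
    "fields": {"name": "Paris", "country": "FR"}
  }
]
```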


If your data structure is constantly changing and the data needs to vary depending on conditions, we recommend the second option. This is of critical significance when working with large databases. Here it is more logical to generate the test data set in the setUp method. For example, you can manually create an entry in a database table, generate a file, or use a tool for dynamic data generation that replaces fixtures, such as factory_boy.

This tool is compatible with a few ORMs:

  • Django
  • MongoEngine
  • SQLAlchemy

You can generate different sets of data and strictly set parameters if needed:

class UserFactory(factory.django.DjangoModelFactory):

    class Meta:
        model = User

    @factory.sequence
    def username(n):
        return '{0}_{1}'.format(lorem_ipsum.words(1, False), n)

    @factory.sequence
    def email(n):
        return '{0}_{1}'.format(lorem_ipsum.words(1, False), n)

    @factory.sequence
    def first_name(n):
        return '{0}_{1}'.format(lorem_ipsum.words(1, False), n)

    @factory.sequence
    def last_name(n):
        return '{0}_{1}'.format(lorem_ipsum.words(1, False), n)

    @factory.lazy_attribute
    def password(self):
        return make_password('qwerty')

    is_active = True

You can also use SubFactory, RelatedFactory and post_generation to generate all the corresponding relations: ForeignKey, ManyToMany and others.

class TaskFactory(factory.django.DjangoModelFactory):

    class Meta:
        model = Task

    @factory.lazy_attribute
    def name(self):
        return lorem_ipsum.words(3, False)

    @factory.lazy_attribute
    def start(self):
        return now()

    @factory.post_generation
    def end(self, create, extracted, **kwargs):
        if create:
            self.end = self.start + timedelta(days=1)

    user = factory.SubFactory(UserFactory)

How to use it in tests:

class TestAPI(APITestCase):

    def setUp(self):
        self.url = reverse('api:user:api_task_list')

        # create a user and ten tasks that belong to them
        self.user = UserFactory()
        TaskFactory.create_batch(10, user=self.user)

    def test_user_tasks_list(self):
        self.assertEqual(Task.objects.count(), 10)

        resp = self.client.get(self.url)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        self.assertEqual(len(, 10)

Working with the database slows down your test run, because before tests execute the following happens:

  • The test database is emptied if it contains any data and the user grants permission;

  • All tables and indexes are created in test database;

  • Fixtures sets are loaded;

  • Tests are executed;

  • Everything created during test execution is deleted.

Can we speed everything up?

Well, we can easily switch the database to SQLite. Tests will be executed much faster.
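A minimal sketch of how such a switch might look in Django (the settings-module name and the import path are assumptions; the DATABASES structure is standard Django configuration):

```python
# settings_test.py -- hypothetical test-only settings module
from myproject.settings import *  # noqa: F401,F403  (assumed main settings module)

# Swap the production database for an in-memory SQLite one to speed tests up.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }
}
```

Tests would then be run with something like `python test --settings=myproject.settings_test`.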


PostgreSQL

Creating test database for alias 'default'...
Ran 4 tests in 113.917s

SQLite

Creating test database for alias 'default'...
Ran 4 tests in 67.901s

The speed increased twofold, as you can see. You should, though, be cautious when changing the database, for example if you are working with something specific like queryset.extra(). You can also find yourself up the creek when working with DATETIME formats:

  • SQLite uses the ISO-8601 date and time format;
  • PostgreSQL supports ISO 8601, SQL-compatible, traditional POSTGRES, and other formats;

You’ll also get an error when changing the database if you use a JSONField or ArrayField:

InterfaceError: Error binding parameter 6 - probably unsupported type.

When you work with the database, your tests should run against the same database engine your production server uses. This will save you a lot of problems later on.

Code coverage

Code coverage is a measure that determines how much of the application’s source code is exercised by tests. The tool called Coverage is usually used for measuring it. Racing for a high coverage percentage leads to no good: a big number doesn’t equal an absence of errors. A well-written test should embrace all the cases and errors, and it should check all ACLs and service availability.


So how to write tests:

  • Cover all cases;
  • Consider all the variants of errors;
  • Check access rights;
  • Check the validity of the received data.

Following these rules will guarantee you good code coverage.

TDD, or test-driven development

The method consists of the following steps:

1. First, you write a test for a given task;

2. Then you write code for this task until the test passes;

3. After this, you refactor the code and make it comply with the standards;

4. Finally you repeat the whole process for the next part of the code.

Let’s see how it works with a unit testing example.

Suppose we have a task to implement the sending of a form that contains a great amount of data with various validations. Suppose it’s a REST API and we need to send a complicated JSON.

1. We start by writing a unit test for sending a simple form without attachments. At first the test won’t pass; we get failures. Then we write the code and the test passes.

2. We add a test for error validation, which won’t pass at first either. Then we continue with writing code and refactoring it.

3. Then we add an attachment to the form we created. The test fails again, so we write code to make it pass.

4. Then we add a test for attachment validation and write code which makes that test pass.

5. We’ll end with code refactoring.

The process described above consists of small, repeated steps which result in working code.
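As a toy illustration of this red-green cycle, assume the task is a hypothetical validate_age function; the test class is written first, and the function is filled in only to make the tests pass:

```python
import io
import unittest


# Step 2: the implementation, written only after the test below existed and failed.
def validate_age(value):
    """Return True when value is an int in a plausible human age range."""
    return isinstance(value, int) and not isinstance(value, bool) and 0 <= value <= 150


# Step 1 (written first): with no implementation the tests fail ("red");
# once the function above is written they pass ("green"). Step 3 is refactoring.
class ValidateAgeTest(unittest.TestCase):
    def test_valid_age(self):
        self.assertTrue(validate_age(30))

    def test_invalid_age(self):
        self.assertFalse(validate_age(-1))
        self.assertFalse(validate_age("thirty"))


suite = unittest.defaultTestLoader.loadTestsFromTestCase(ValidateAgeTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```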

Where does the advantage of this method lie? Imagine you came up with code that sends this complicated data structure. You’ll spend hours debugging it, since you’ll need to fill in the data and send it every time. As an option, you can divide the data into blocks, but that will still lead to filling in a great amount of data eventually. With TDD you fill in the data once, and after that you just debug the code against the test.

TDD offers much more than mere correctness checking; it can also influence the program’s design. By focusing on tests from the beginning, you understand more clearly what kind of functionality the user needs.

Though you’ll need to write more code with the TDD method, the overall time spent on development turns out to be smaller, because you decrease the amount of time spent on debugging manyfold. Moreover, the more tests you write, the fewer errors the code will have.

When to apply TDD?

Every developer who has resorted to TDD at least once in their career and found the methodology useful chooses their own area to apply it. In our company, we apply TDD only when working with huge amounts of data with a complicated structure. This does save time.

Mock: what’s that and why should you apply it?

According to the dictionary, a mock is “an act of imitation”. The Python module with this name helps to simplify module testing.

Its operating principle is simple: if you need to test a function, you can substitute everything that doesn’t relate to it (e.g. reading from disk or the network) with mocks. And you won’t need to adapt the tested functions for this: Mock replaces objects in other modules even if the code doesn’t accept them as parameters. It means you can execute tests without adapting anything to them.
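A minimal sketch of this substitution, using a hypothetical exchange-rate lookup as the "expensive" dependency (patch.object from unittest.mock swaps it for a stub inside the with block):

```python
import types
from unittest import mock

# A stand-in "module" holding an expensive network function (all names hypothetical).
network = types.SimpleNamespace()


def fetch_exchange_rate(currency):
    """Pretend network call; in tests we never want this to actually run."""
    raise RuntimeError("no network access in tests")


network.fetch_exchange_rate = fetch_exchange_rate


def price_in_currency(price_usd, currency):
    """Unit under test: converts a USD price using the fetched rate."""
    return round(price_usd * network.fetch_exchange_rate(currency), 2)


# Replace the network call with a stub for the duration of the block;
# price_in_currency itself runs unmodified.
with mock.patch.object(network, "fetch_exchange_rate", return_value=0.9) as fake_fetch:
    result = price_in_currency(100, "EUR")
```

After the with block exits, the original function is restored automatically.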

So this kind of behaviour is not a toy rocket; it is more of a toy planet where you can fly your test jet planes and rockets. You use the mock package for this. If you use Python 2.7, you just need to install the package:

$ pip install mock

Python 3.3 and above include the library as unittest.mock, which you can use directly.

A Mock object has a number of attributes with the information about calls:

  • called — shows if the object was called or not
  • call_count — the number of the calls
  • call_args — the arguments of the last call
  • call_args_list — the list of calls
  • method_calls — the track of calls to methods and attributes and their methods and attributes
  • mock_calls — the record of calls to the mock object, its methods, attributes and returned values

To double-check everything for better confidence, you can also call one of the assert_* methods of the mock object in your automated tests.

You can make your stub smarter using side_effect:

def small_function(*args, **kwargs):
    # replacement logic goes here
    pass

with patch('module.strong_method', side_effect=small_function) as mock_method:
    ...

To be more illustrative, let’s consider a few more unit testing examples.

Suppose we are working on a payment gateway using Stripe. After signing up we got all the keys and implemented client signup and payment. Our next step is to cover this functionality with tests.

Client signup is carried out using the stripe library by calling the create method on a stripe object.


Let’s add registration in stripe on the background alongside regular registration.

Now how can we test it? We can use a stub for this function, which means we need to patch it. patch can act as a decorator in tests:

@patch('stripe.Customer.create', return_value=FAKE_CUSTOMER)
def test_success(self, create_stripe_customer):
    resp =, self.valid_sign_up_data)
    self.assertEqual(resp.status_code, status.HTTP_200_OK)

We can also patch several functions or methods:

@patch("stripe.Customer.create", return_value=deepcopy(FAKE_CUSTOMER))
@patch("stripe.Customer.retrieve", return_value=deepcopy(FAKE_CUSTOMER))
@patch("stripe.Event.retrieve")
def test_webhook_with_transfer_event(self, event_retrieve_mock, customer_retrieve_mock, customer_create_mock):
    fake_event = deepcopy(FAKE_EVENT_CUSTOMER_CREATED)
    event_retrieve_mock.return_value = fake_event

    # webhook_url is assumed to be defined in setUp
    resp = Client().post(self.webhook_url, fake_event, content_type="application/json")
    self.assertEqual(resp.status_code, 200)


This tool enables us to test handcrafted wrappers around third-party APIs without calling them directly. In other words, it allows replacing expensive operations with stubs.

When to use Mock:

  • when you need to save resources;

  • when you need to test third-party API wrappers;

  • when you need to replace the result of function execution.

Having reviewed Python unit test examples and how they help in the development process, we arrive at the following conclusions.


1. Writing unit tests is worthwhile at any time;

2. If you lack time, you’d better write at least smoke tests to rule out the most evident mistakes;

3. Use TDD approach if there’s a need;

4. If you have third-party API wrappers, use Mock to test them and to replace the results of third-party API calls.

Useful links

  1. Unit testing framework
  2. Factory_boy
  3. Coverage
  4. Unittest.mock — mock object library