ODK 1 Developer Call - 2019-02-06 - Testing strategy goals

ODK 1 Developer calls bring together the developer community to discuss issues that concern us all. Everyone is welcome to participate in these calls.

The calls are held on the first Wednesday of every month on UberConference from 14-15 UTC. We try to send out a reminder on Slack the day before.

We put the agenda, audio, and transcript of every call at https://docs.google.com/document/d/1hszoTRzWG5W04JXgcBzE7BcdEZGB75lA8tftTNZ5lzA


Our next call will be Wed, Feb 6th, from 14-15 UTC (see in your time zone). Will you be there?

  • I will be there!
  • I can't make it this time but I'm interested in future calls


We want to take this opportunity to talk about testing strategy goals and how we can pursue them across the ODK ecosystem.

If there are other topics you would like to add to the agenda, please comment below. :point_down:

During February's dev call we want to talk about testing strategy goals and how to articulate them across the different parts of the toolset.

First, let me say that I'm looking forward to hearing how testing is addressed in the context of Collect, which is the tool I have worked on the least. I encourage all Collect devs to show up and share their thoughts on this matter.

For example, in Briefcase we have:

  • Tests that check artefacts at a low level of abstraction, e.g. testing a method's output for a given input
  • Tests that verify HTTP interaction, e.g. checking that calling a method executes the expected HTTP request
  • Tests that check for side effects when interacting with individual small UI components, e.g. testing the behavior of a custom checkbox with an indeterminate state
  • Tests that drive a whole UI panel to check the interaction between individual components, e.g. verifying that a button is disabled while no element is selected from a list
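To make the first type concrete, here's a minimal JUnit sketch. `FormNameSanitizer` is made up for illustration, not actual Briefcase code; the point is the shape of the test: one input, one asserted output.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Made-up artefact standing in for any small, pure piece of logic.
class FormNameSanitizer {
  static String sanitize(String name) {
    return name.replaceAll("[/\\\\:]", "_");
  }
}

public class FormNameSanitizerTest {
  @Test
  public void replaces_illegal_filename_characters_with_underscores() {
    // Lowest level of abstraction: pin a method's output for a given input.
    assertEquals("my_form_v2", FormNameSanitizer.sanitize("my/form:v2"));
  }
}
```

Notice that the test name describes behavior rather than implementation, which is what lets these tests double as documentation later on.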

Each type of test has different goals in mind, and we could add even more types to the list, such as:

  • Outside-in tests that drive the actual app, useful for smoke testing
  • Acceptance tests that compare the current and previous versions of a particular piece of the application and let a human reviewer decide whether they're OK.
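The acceptance type can be as simple as a golden-file comparison. Again, a sketch under stated assumptions: `CsvExporter` and the file paths are hypothetical, and a real setup would need a way to re-approve the golden file when a change is intentional.

```java
import static org.junit.Assert.assertEquals;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.junit.Test;

public class ExportAcceptanceTest {
  @Test
  public void export_matches_the_approved_golden_file() throws Exception {
    // CsvExporter is hypothetical: run the current version of the code under test...
    String current = CsvExporter.export(Paths.get("src/test/resources/submissions"));
    // ...and compare it against output a human approved for the previous
    // version. Any difference fails the build and asks for a human decision.
    String approved = new String(
        Files.readAllBytes(Paths.get("src/test/resources/export.approved.csv")),
        StandardCharsets.UTF_8);
    assertEquals(approved, current);
  }
}
```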

All these are automated tests that run during the build workflow, which means that any failure stops the build and produces feedback to act upon. They should be fast, and the difficulty of writing a test usually grows with the level of abstraction of what you want to test.

We commonly say that the general goal of testing is to ensure a certain level of quality in the software, but tests bring other benefits too:

  • Writing tests gives feedback about complexity, which can be used to control and, eventually, reduce it.
  • When the behavior of software is properly tested, you can refactor it without being concerned about breaking things.
  • When the tests focus on behavior, they become a tool to document and discuss the behavior itself. This is super useful to ease the onboarding of new developers to a project.

I personally like to see it the other way around. Testing is a design tool that lets me:

  • be aware of complexity by surfacing it where it exists
  • reduce complexity by refactoring the design with a safety net

And as a side effect, I ensure the quality of the software along the way.

Considering that no one has shown interest, we'll call off this dev call.