TIDS Progress Update – Live Platform Testing

Since our last update back in July, we are pleased to announce that the TIDS hardware has been successfully installed on site at the first of our trial stations. The TIDS system will remain in place for the next five months so that we can assess both its detection accuracy and the effectiveness of the audible deterrent.

Currently the system is running without the audible deterrent element so that we can first establish the accuracy of the trespass detection itself. Throughout the test phase we will work to minimise the frequency of false positives by tuning parameters in the software, and by experimenting with both the “restricted regions” (the areas where trespass is detected) and the “blanking regions” (the areas where detection is suppressed). If need be, the physical camera set-up can also be altered to better capture the key areas of trespass.
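To illustrate how restricted and blanking regions can interact, here is a minimal sketch (entirely our own illustration; the names Rect and isTrespass are hypothetical, and this is not the production TIDS code). A detection only counts as trespass when it falls inside a restricted region and outside every blanking region:

```cpp
#include <vector>

// Illustrative sketch only -- not the production TIDS code. Axis-aligned
// rectangles stand in for the configurable regions.
struct Rect
{
    double x0, y0, x1, y1;

    bool contains( double x, double y ) const
    {
        return x >= x0 && x <= x1 && y >= y0 && y <= y1;
    }
};

bool isTrespass( double x, double y,
                 const std::vector<Rect>& restricted,
                 const std::vector<Rect>& blanking )
{
    for( const Rect& b : blanking )
        if( b.contains( x, y ) ) return false; // blanking suppresses detection

    for( const Rect& r : restricted )
        if( r.contains( x, y ) ) return true;  // inside a restricted region

    return false;
}
```

Tuning for false positives then amounts to reshaping these regions until genuine incursions are flagged and harmless movement is not.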

As we head into the winter months, we will also be able to establish how well the system performs in more volatile weather conditions, where detection may be more difficult as a result of changes in light and visibility.

A second installation will soon be under way at our second test site, where a similar test set-up will run until the middle of next year. Once the initial live test phase is complete, we will use the findings from the trial to further develop the functionality of the anti-trespass deterrent and the way it alerts station staff to trespass events. We will also be looking at developing TIDS into a marketable product.

Keep an eye on the blog for future updates on how the system is coping with the pressures of a live environment.

Update: Second Site Installation

The system has since successfully been installed at the second test site and the testing process is now under way.


Exposed to the Elements: Trespass Identification and Deterrent System (TIDS) is put to the Test

Last year we announced our participation in an SBRi funding call from Network Rail to produce a system aimed at reducing trespass across the UK rail network. Since then, alongside the continued development of the software element behind the system, we have been busy working on our preparation for the upcoming onsite testing period.

Alterations to Hardware Set Up

In our original outline, the plan was to install a pole to which the various pieces of hardware could be attached. Following surveys of the two sites, the plan was altered so that the detection equipment would instead be fitted to one of the gantries already in place.

Where the original pole installation accounted for the protection of the equipment from potential weather and public damage, this was not the case with the new gantry plan. There was also a need to minimise the number of separate “installations” for which permission would have to be obtained for live site testing.

As a solution to the new problem, one of our engineers produced designs for an enclosure that would hold all of the detection equipment, including the camera and speaker. Not only would this provide the necessary level of protection, but would also reduce the number of installations to one.

Soak Testing

We are currently undertaking something a bit new to us here at Zircon: with access to the finished casing, we have started soak testing the final system set-up. Up to this point the system has only been run and tested within the comfort of an office environment. Before it goes into the more volatile environment of an active station, we need to have confidence in its ability to run constantly for the duration of the live testing phase, whatever the conditions. This means detecting incidents and running the deterrent regardless of light level and varying weather.

You may also have noticed from the CAD drawings that there are gaps in the casing to allow air to flow through for ventilation and to prevent the CPUs from overheating. The obvious downside to this layout is that it presents an opportunity for moisture to enter or build up within the casing. To prevent water damage, each component that makes up the system is IP66 rated, and we have ensured that any water that does accumulate inside the casing can drain out of the base. Confirming that these measures are effective will be another element of this stage of testing.

As you can probably tell, we may not have a fancy lab or test site in which to conduct this process, but that has not got in the way of progress. In the last couple of weeks the British weather has stayed true to its reputation and thrown pretty much all it can at us, swinging from beautiful sunshine and heat to heavy rain and humidity. We are pleased to say that, so far, under the watchful eye of one of our engineers and their family, there has been no break in the system’s ability to detect intrusions into the defined incursion areas.

As stated previously, this step in the preparation process also provides a means for us to ensure that low light levels have no impact on the accuracy of detection and the sounding of the deterrent. To allow for detection without a light source, the system has infrared capabilities, and we need to establish how the resulting change in image quality could affect detection. So far we have found that the system seems to be much more sensitive once the switch to infrared occurs, so we are working on finding the right balance for accurate detection at the two extremes of light level.

Unexpected Test Helps to Prove The Detection Capability

When you test in a controlled environment like an office, it is all too easy to subconsciously set the system up in a way that produces the best results or, when working with video analytics, to behave in a specific manner that you know the system will be able to pick up on. This is a big part of why we are taking as many steps as we can in the testing of TIDS: to be sure that it will be a system that can be trusted. So it is always nice when a third party comes in to run a test of their own. One of the engineers on the TIDS project was in for quite a surprise one weekend, when he received a notification of an unauthorised incursion from the unit still located at our office in Trowbridge. Luckily it was not a burglar breaking in, but one of the cleaners happily going about his duties.

The Next Steps

Realistically, the next big step forward for the TIDS solution will be its installation on an active platform. Currently our aim is to obtain all of the permissions to allow for placement by the end of July, with the system set to run for several months. This testing phase should help us establish the effectiveness of the deterrent and give us guidance on how to further improve the system.

Could Video Analytics Make a Difference on your Project?

If you are considering exploring Video Analytics, Machine Learning or AI and how it could enhance your product or system, our team of experts are here to help.


Avoiding Repeated Test Behaviour Across Multiple Tests with Test Fixtures

Anyone who has had the pleasure of writing or maintaining tests for code with external dependencies will (hopefully!) have made use of mocked functions or objects. In gMock, you define the behaviour of a mock via the EXPECT_CALL and ON_CALL macros in each test before it is used. When multiple tests rely on the same use of a mock, the approach I see far too frequently is either to copy-paste the mock’s behaviour across each test case that relies on it, or to dump all the assertions into a single monolithic test case to avoid copy-pasting. Both are terrible habits to fall into that lead to tests which are brittle and harder to read.

In this guide, I want to demonstrate how to recognise repeated test set-ups, and how utilising a Test Fixture class to resolve this can simplify the production and maintenance of tests.

Bad Examples of Tests

Consider the following procedural function written in C. Let’s say we have been tasked with writing a series of tests for some legacy code that, when called, retrieves some data from an external module (retrieveExternalData) before passing it on to a function in another external module (sendMessage). For now, ignore the fact that this code is absolute trash and should have been refactored; it’s hard to think of a good example!

Source Code
int functionUnderTest( enum DataType data_type )
{
    int error_code;
    struct SourceDataStructure external_data;
    struct DestinationDataStructure modified_data;

    error_code = retrieveExternalData( &external_data );

    if( error_code == 0 )
    {
        // Modify data according to the data type
        switch( data_type )
        {
            case 0:
                // Modify data one way
                ...
                break;

            case 1:
                // Modify data another way
                ...
                break;

            default:
                // Return error
                error_code = -1;
                break;
        }

        if( error_code == 0 )
        {
            error_code = sendMessage( &modified_data );
        }
    }

    return error_code;
}

In between these two external function calls, some manipulation of the data occurs, varying with the value of data_type passed into the function. For now our tests will focus on verifying that the data passed into sendMessage was modified to specification according to the value of data_type.

Because there are external dependencies that we need to mock in order to take full control over how the tests operate, the mocked behaviour may be identical across many of the tests. As a result, different tests can end up exhibiting the same patterns. See, for example, the following:

Repeating Test Cases
TEST( ModifyingData, ShallSetXWhenDataTypeIs0 )
{
    struct SourceDataStructure source_data;
    struct DestinationDataStructure output_data;
    int error_code;

    //----------------------------- ARRANGE ----------------------------
    // Configure relevant variables
    source_data.SomeProperty1 = FOO;
    source_data.SomeProperty2 = BAR;

    // Initialise output_data to invalid values
    output_data.SomeProperty1 = INVALID_VALUE;
    output_data.SomeProperty2 = INVALID_VALUE;

    // Configure the mock behaviour of retrieveExternalData
    EXPECT_CALL( *_MockObject, retrieveExternalData )
        .Times( 1 )
        .WillOnce( DoAll( SetArgPointee<0>( source_data ), // Provide the source data when it is called
                          Return( 0 ) ) );                 // Return no error

    // Configure the mock behaviour of sendMessage to capture the modified data
    // and store it in output_data to allow us to assert on its properties later
    EXPECT_CALL( *_MockObject, sendMessage )
        .Times( 1 )
        .WillOnce( DoAll( SaveArg<0>( &output_data ), // Capture the modified data
                          Return( 0 ) ) );            // Return no error

    //------------------------------- ACT ------------------------------
    // Call the function under test with data type 0
    error_code = functionUnderTest( 0 );

    //------------------------------ ASSERT -----------------------------
    // Check that no error was returned
    ASSERT_THAT( error_code, 0 );

    // Check that the modified data captured is correct for this data type
    ASSERT_THAT( output_data.SomeProperty1, EXPECTED_FOR_DATATYPE_0 );
    ASSERT_THAT( output_data.SomeProperty2, EXPECTED_FOR_DATATYPE_0 );
}

TEST( ModifyingData, ShallSetXWhenDataTypeIs1 )
{
    struct SourceDataStructure source_data;
    struct DestinationDataStructure output_data;
    int error_code;

    //----------------------------- ARRANGE ----------------------------
    // Configure relevant variables
    source_data.SomeProperty1 = FOO;
    source_data.SomeProperty2 = BAR;

    // Initialise output_data to invalid values
    output_data.SomeProperty1 = INVALID_VALUE;
    output_data.SomeProperty2 = INVALID_VALUE;

    // Configure the mock behaviour of retrieveExternalData
    EXPECT_CALL( *_MockObject, retrieveExternalData )
        .Times( 1 )
        .WillOnce( DoAll( SetArgPointee<0>( source_data ), // Provide the source data when it is called
                          Return( 0 ) ) );                 // Return no error

    // Configure the mock behaviour of sendMessage to capture the modified data
    // and store it in output_data to allow us to assert on its properties later
    EXPECT_CALL( *_MockObject, sendMessage )
        .Times( 1 )
        .WillOnce( DoAll( SaveArg<0>( &output_data ), // Capture the modified data
                          Return( 0 ) ) );            // Return no error

    //------------------------------- ACT ------------------------------
    // Call the function under test with data type 1
    error_code = functionUnderTest( 1 );

    //------------------------------ ASSERT -----------------------------
    // Check that no error was returned
    ASSERT_THAT( error_code, 0 );

    // Check that the modified data captured is correct for this data type
    ASSERT_THAT( output_data.SomeProperty1, EXPECTED_FOR_DATATYPE_1 );
    ASSERT_THAT( output_data.SomeProperty2, EXPECTED_FOR_DATATYPE_1 );
}

We can clearly see that these tests are almost identical in every way except for the values we are actually testing for. If the number of data types that change the behaviour of this function were to increase, so would the number of test cases needed. To make matters worse, what happens if the external dependency changes in a way that affects our code? It would probably break a lot of test cases, and time would be wasted fixing them all. Such pitfalls make the process of refactoring code far more tedious than it ought to be.

Now, the most natural conclusion on seeing repeated code would be to place that behaviour in its own function. That would be great, except that gMock will not allow the behaviour of the mocks to be defined in a function called from a TEST macro. So how can this problem be solved? Thankfully, googletest provides us with…

Test Fixtures

In googletest, a Test Fixture is a class that allows us to control how a group of tests behaves. We can use it to define common mock behaviour and initialise data at the beginning of each test without continually repeating ourselves.

To begin with, the following is a bare-bones implementation of a test group derived from the TestFixture class:

TestFixture Implementation
class ModifyingData : public TestFixture
{
public:
    // Constructor. Allows any data that needs to be created once, but
    // used many times, to be initialised here.
    ModifyingData() : TestFixture()
    {
    }

    // Test setup. This gets called before each test is run
    void SetUp()
    {
    }

    // Test tear down. This gets called after each test has run
    void TearDown()
    {
    }
};

With this class, we can now place the default behaviours of the mocks and initialise any data within the SetUp method as follows:

TestFixture with common behaviour
class ModifyingData : public TestFixture
{
public:
    struct SourceDataStructure source_data;
    struct DestinationDataStructure output_data;

    // Constructor. Allows any data that needs to be created once, but
    // used many times, to be initialised here.
    ModifyingData() : TestFixture()
    {
    }

    // Test setup. This gets called before each test is run
    void SetUp()
    {
        // Configure relevant variables
        source_data.SomeProperty1 = FOO;
        source_data.SomeProperty2 = BAR;

        // Initialise output_data to invalid values
        output_data.SomeProperty1 = INVALID_VALUE;
        output_data.SomeProperty2 = INVALID_VALUE;

        // Configure the mock behaviour of retrieveExternalData
        ON_CALL( *_MockObject, retrieveExternalData )
            .WillByDefault( DoAll( SetArgPointee<0>( source_data ), // Provide the source data when it is called
                                   Return( 0 ) ) );                 // Return no error

        // Configure the mock behaviour of sendMessage to capture the modified data
        // and store it in output_data to allow us to assert on its properties later
        ON_CALL( *_MockObject, sendMessage )
            .WillByDefault( DoAll( SaveArg<0>( &output_data ), // Capture the modified data
                                   Return( 0 ) ) );            // Return no error
    }

    // Test tear down. This gets called after each test has run
    void TearDown()
    {
    }
};

Now for each test run, we no longer have to define the behaviour of the mocks, nor do we have to initialise commonly used data.

You may also have noticed that the mock behaviour uses the ON_CALL method instead of the EXPECT_CALL method. The reason is that I do not actually want to test whether a mocked function is called by default; I just want to define how that mocked function behaves if and when it is called. If I were to use EXPECT_CALL, I would be demanding that every test in this group always calls these functions. That may not be the case when a test focuses on control flow that results in sendMessage never being called, causing the test to fail unnecessarily. There’s an interesting read that goes into more detail on why you would use ON_CALL over EXPECT_CALL here: Knowing When to Expect.
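The distinction can be boiled down to a toy model (our own illustration, not gMock’s real internals; ToyMock and its members are hypothetical): a default behaviour only says what happens *if* the mock is called, while an expectation additionally fails verification when the call count does not match.

```cpp
#include <functional>

// Toy model of ON_CALL vs EXPECT_CALL -- our own illustration,
// not gMock's real implementation.
struct ToyMock
{
    std::function<int()> default_action = []{ return 0; };
    int required_calls = -1; // -1 means no expectation: any call count is fine
    int actual_calls = 0;

    int call() { ++actual_calls; return default_action(); }

    // Mirrors the check gMock performs when a mock is verified
    bool verify() const
    {
        return required_calls < 0 || actual_calls == required_calls;
    }
};
```

A mock configured only with a default behaviour verifies successfully even when it is never called, whereas one with required_calls set to 1 would fail any test whose control flow never reaches it; that is exactly the unnecessary failure described above.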

With this common setup out of the way, we can refactor the tests so that they are easier to read and far more maintainable. This time, all tests will use the TEST_F macro instead:

Improved Test Cases
TEST_F( ModifyingData, ShallSetXWhenDataTypeIs0 )
{
    int error_code;

    //----------------------------- ARRANGE ----------------------------
    // Nothing to do here anymore...

    //------------------------------- ACT ------------------------------
    // Call the function under test with data type 0
    error_code = functionUnderTest( 0 );

    //------------------------------ ASSERT -----------------------------
    // Check that no error was returned
    ASSERT_THAT( error_code, 0 );

    // Check that the modified data captured is correct for this data type
    ASSERT_THAT( output_data.SomeProperty1, EXPECTED_FOR_DATATYPE_0 );
    ASSERT_THAT( output_data.SomeProperty2, EXPECTED_FOR_DATATYPE_0 );
}

TEST_F( ModifyingData, ShallSetXWhenDataTypeIs1 )
{
    int error_code;

    //----------------------------- ARRANGE ----------------------------
    // Nothing to do here anymore...

    //------------------------------- ACT ------------------------------
    // Call the function under test with data type 1
    error_code = functionUnderTest( 1 );

    //------------------------------ ASSERT -----------------------------
    // Check that no error was returned
    ASSERT_THAT( error_code, 0 );

    // Check that the modified data captured is correct for this data type
    ASSERT_THAT( output_data.SomeProperty1, EXPECTED_FOR_DATATYPE_1 );
    ASSERT_THAT( output_data.SomeProperty2, EXPECTED_FOR_DATATYPE_1 );
}

Already, these tests become far easier to read, and are far more maintainable. This simplifies the process of modifying tests when changes to the source code are planned, or allows for quick updates to the behaviour of external dependencies if they are changed.

On a side note, you have probably noticed that the tests still repeat themselves. The good news is that, unlike gMock’s macros, googletest’s assertion macros can be placed in a function, with parameters allowing us to define the values we want to assert against. The bad news is that a very useful feature of googletest’s Test Explorer window in Visual Studio becomes slightly less useful: for any failing test, the Test Explorer lets you jump to the assertion that failed, and if that assertion is inside a function it jumps to the line within that function, making it harder to see the overall context of the failure. However, having less code overall is more beneficial in the long term than the slight annoyance of having assertions within a function. In the interest of good coding practice, let’s refactor the tests to make them even more maintainable:

Improved & Refactored Test Cases
void assertModifiedDataIsCorrect( const struct DestinationDataStructure &output_data,
                                  int error_code,
                                  int expected_value_property1,
                                  int expected_value_property2 )
{
    // Check that no error was returned
    ASSERT_THAT( error_code, 0 );

    // Check that the modified data captured is correct for this data type
    ASSERT_THAT( output_data.SomeProperty1, expected_value_property1 );
    ASSERT_THAT( output_data.SomeProperty2, expected_value_property2 );
}

TEST_F( ModifyingData, ShallSetXWhenDataTypeIs0 )
{
    //----------------------------- ARRANGE ----------------------------
    // Nothing to do here anymore...

    //--------------------------- ACT & ASSERT --------------------------
    assertModifiedDataIsCorrect( output_data,
                                 functionUnderTest( 0 ),
                                 EXPECTED_FOR_DATATYPE_0,
                                 EXPECTED_FOR_DATATYPE_0 );
}

TEST_F( ModifyingData, ShallSetXWhenDataTypeIs1 )
{
    //----------------------------- ARRANGE ----------------------------
    // Nothing to do here anymore...

    //--------------------------- ACT & ASSERT --------------------------
    assertModifiedDataIsCorrect( output_data,
                                 functionUnderTest( 1 ),
                                 EXPECTED_FOR_DATATYPE_1,
                                 EXPECTED_FOR_DATATYPE_1 );
}

Deviating a Mock’s Default Behaviour

It should be clear by now how to avoid repeating ourselves when defining a mock’s behaviour, but what do you do when testing edge cases that need a mock to do something different? Referring back to the source code, we can see that there is control flow within the function that will not modify any data when retrieveExternalData returns an error code other than 0:

Source Code: different control flow
error_code = retrieveExternalData( &external_data );

if( error_code == 0 )
{
    // Modify data
    ...
}

The great thing about gMock is that it allows us to define a new behaviour for a mock, and gMock will use the most recent definition that matches, allowing us to keep using the same Test Fixture. An example test case for this control flow could be as follows:

Overriding Default Mock Behaviour
TEST_F( ModifyingData, ShallNotOccurWhenExternalDataIsNotAvailable )
{
    int error_code;
    int expected_error_code = -1;

    //----------------------------- ARRANGE ----------------------------
    // Configure the mock behaviour of retrieveExternalData
    ON_CALL( *_MockObject, retrieveExternalData )
        .WillByDefault( Return( expected_error_code ) ); // Return an error

    // Ensure that data is not passed to sendMessage
    EXPECT_CALL( *_MockObject, sendMessage )
        .Times( 0 );

    //------------------------------- ACT ------------------------------
    error_code = functionUnderTest( 0 );

    //------------------------------ ASSERT -----------------------------
    ASSERT_THAT( error_code, expected_error_code );
}

In the above example, the mock’s behaviour defined in the test’s ON_CALL comes after the one in the Test Fixture’s SetUp method, so it takes precedence.

Interestingly, ON_CALL and EXPECT_CALL both follow this rule: the most recent EXPECT_CALL can take precedence over an ON_CALL for the same mock, and vice versa. This is useful in the above example, where it is important to assert that sendMessage is never called when the function under test fails to retrieve any data from retrieveExternalData. We achieve this by specifying the cardinality of the mocked function’s call as 0 ( .Times( 0 ) ).
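The “most recent definition wins” rule can be sketched with another toy model (again our own illustration, not gMock’s actual implementation; LayeredMock is hypothetical): behaviours are stored in registration order and the newest one is used, which is why a definition made inside a test body overrides the default installed in SetUp().

```cpp
#include <functional>
#include <vector>

// Toy illustration of "most recent definition wins" -- not gMock's
// actual implementation.
struct LayeredMock
{
    std::vector<std::function<int()>> behaviours;

    void define( std::function<int()> behaviour )
    {
        behaviours.push_back( std::move( behaviour ) );
    }

    int call() { return behaviours.back()(); } // newest definition takes precedence
};
```

Defining a success behaviour first (as SetUp would) and then an error behaviour inside a test leaves the error behaviour in force, just as the edge-case test above relies on.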

Other Tests to Consider

With this knowledge, consider how the default behaviours would differ when testing the following edge cases, and whether ON_CALL or EXPECT_CALL would be appropriate:

  • Passing a data_type outside of the range accounted for within this function. The function retrieveExternalData would still be called in the same way, but what happens to sendMessage?
  • How could a test be written in the same Test Fixture that forces sendMessage to return an error?

When Not To Deviate Default Mock Behaviour

If you ever find yourself deviating a mock’s default behaviour in the same way more than once, it is probably time to ask whether the Test Fixture is doing everything it reasonably can. To avoid falling into the same trap and repeating yourself, it is probably a good idea to create a new Test Fixture and define the default behaviour for those mocks in its SetUp method. At the end of the day, the less code there is to maintain, the better it is for everyone. If it helps, always pretend that the person lumped with the misfortune of maintaining your code is a psychopath who knows where you live…
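As a toy sketch of that advice (our own illustration, not googletest itself; the fixture names and the retrieveExternalData stand-in are hypothetical), a second fixture can inherit the common set-up and redefine only the default its group of tests repeatedly needs:

```cpp
#include <functional>

// Toy sketch: a base "fixture" installs a default action in SetUp(),
// and a derived fixture overrides just that default for its own tests.
struct BaseFixture
{
    std::function<int()> retrieveExternalData; // stand-in for the mocked call

    virtual void SetUp()
    {
        retrieveExternalData = []{ return 0; }; // default: data available
    }

    virtual ~BaseFixture() = default;
};

struct FailingDataFixture : BaseFixture
{
    void SetUp() override
    {
        BaseFixture::SetUp();                    // keep the rest of the common set-up
        retrieveExternalData = []{ return -1; }; // group default: data unavailable
    }
};
```

Each group of tests then gets its own defaults from its own SetUp, and no test body has to repeat the deviation by hand.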


IR35, Here it Comes Again…

In 2021 the reform to the IR35 Off-Payroll rules is to be rolled out to the private sector. As before, the reform will only affect companies that do not meet the following criteria:

  • an annual turnover below £10m,
  • fewer than 50 employees, or
  • a balance sheet showing less than £5.1m in assets.

Any company unable to meet these criteria will now be responsible for determining the IR35 status of its workers, including contractors already under contract before the introduction of the reform.

Whenever a contractor is deemed to be within IR35, whether through an assessment process or with assistance from the official government CEST tool, the fee-payer will be expected to deduct tax and National Insurance at source via PAYE. Any company engaging contractors directly will be considered the fee-payer. Care should be taken that the correct status is determined for each contractor on a case-by-case basis, as the fee-payer will be liable for the tax and National Insurance owed should HMRC disagree with a given status.

Contractors who retain their position, or take on a new one, under a contract deemed inside IR35 may well seek an increase in their rates, allowing them to continue taking home the same income as they did before the reform. You must also be prepared for some existing contractors to terminate their contracts and seek an outside-IR35 engagement elsewhere.
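The arithmetic behind such a rate request can be sketched with purely illustrative numbers; the retention fractions and the £500 day rate below are made-up placeholders, not real HMRC tax bands:

```python
# Purely illustrative: the retention fractions are placeholders,
# not actual HMRC tax or National Insurance rates.

def required_uplift(outside_net_share: float, inside_net_share: float) -> float:
    """Factor a day rate must rise by so take-home pay is unchanged.

    outside_net_share: fraction of gross income kept outside IR35
    inside_net_share:  fraction kept after PAYE deductions inside IR35
    """
    return outside_net_share / inside_net_share

# A hypothetical contractor keeping 80% of gross outside IR35 but only
# 65% inside would need roughly a 23% rate rise to stand still:
uplift = required_uplift(0.80, 0.65)
new_rate = round(500 * uplift)  # on an illustrative £500/day rate
```

In other words, the same headline engagement can cost the fee-payer noticeably more once the contractor prices the PAYE deductions back in.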

Alternatively, there is the option of engaging contractors through umbrella companies. Working this way, the responsibility for tax payments falls to the umbrella company, as the contractor is employed by the umbrella for the duration of the contract.

Solving the Resource Conundrum

Picture this: one minute all is fine and dandy and you have access to all the resources you could possibly need; then, bam, an unexpected challenge arises. Suddenly you find yourself lacking the capacity to meet the new need. What are your options?

Typically, when companies find themselves facing this conundrum, three possible options are laid out in front of them:

1) They can bolster the number of permanent internal staff

Certainly a viable solution where demand has increased and is not expected to diminish again for the foreseeable future. Yet, not so great a solution for handling short-term issues or seemingly random spikes in demand.

2) They can seek the expertise of independent contractors

A bit of a reverse of the permanent-staff choice. A valuable short-term option for managing demand or bringing in relevant expertise to solve a technical issue, but less suitable for serious long-term demand, especially with reforms to IR35 legislation approaching.

3) They can outsource the responsibility of certain elements or projects to another company, either locally or offshore

This option sits somewhere between the previous two. A more than adequate solution for short bursts of issues or demand, yet also capable of running long-term, especially where the length of the increased demand is uncertain or the idea of taking on more permanent staff is unappealing.

As things stand, localised outsourcing is still seen by many organisations as the least appealing of these options. While we can certainly understand where this opinion may have come from, as a trusted outsourcing partner for several large UK companies we have some views on this topic, and would like to counter some of the arguments we regularly hear against outsourcing.

Before we get into it, let's get the most anticipated and oft-repeated argument out of the way: cost. If you go by the headline day rate alone, outsourcing will certainly not come in first, but you should also stop to consider the potential 'hidden' costs associated with the other options:

  • The initial recruitment costs required to add members of staff to your team
  • The ongoing cost of permanent employee payroll and miscellaneous benefits, e.g. sick pay and pension contributions
  • The management overheads
  • The possibility of reworking low-quality code produced by offshore teams

When you take each of these points into account, the cost differential becomes less of a factor, and the frequently overlooked additional benefits of outsourcing still remain.

As regards contracting, a somewhat unexpected downside is that it is only ever a temporary measure. What seems like a big upside may actually turn out to be more of a negative than initially thought: when a contractor leaves, they take their knowledge with them, and if it is not documented correctly it is all too easy for key knowledge to go missing. You are also faced with the fact that a contractor will not wait around until you require their services again. They have to keep earning a living, after all, so there is no guarantee that they will be available as and when you need them. Outsourcing to another company can essentially eliminate these risks: with ready access to a team of people, and the capacity to share knowledge and train up other team members, should one engineer become unavailable there will already be another capable of taking their place.

Where there are concerns about intellectual property rights or other sensitive information, it is not uncommon for the preference to be for scaling internal resources; keeping product development and maintenance within the company boundary creates a sense of security around the likes of IPR. However, a trustworthy outsourcing partner will understand and recognise that IPR remains with the client, and will willingly hand over items such as source code and documentation upon completion.

As an overlooked additional benefit, outsourcing project work opens you up to the experiences and knowledge of every engineer employed by that company. You don’t limit yourself to just one person. Just because a particular individual hasn’t been offered as a resource for your project doesn’t mean that they will not offer their advice or insights to those who are.

Of course, before we bring this article to its conclusion, we cannot talk about outsourcing without at least mentioning IR35. Should you decide to head down the path of recruiting contractors, you should be cautious of IR35 and the potential repercussions should you be found in breach of its terms. Should HMRC deem a contract to be in breach of this legislation, the employer is required to pay the employer contributions that would have arisen during the contractor's term of engagement, and will subsequently be expected to continue making those contributions should the contractor-turned-employee remain with the company.

In the lead-up to the original 2020 reform deadline, many companies decided to place blanket bans on the recruitment of PSCs (personal service companies). Though in some cases this decision has been revoked in the wake of the delay to 2021, the full impact on contracting has yet to be realised. This begs the question: what impact could IR35 have on opinions of outsourcing? Perhaps the once unpopular choice will gain some favour post-IR35, or perhaps the recruitment of permanent staff will become the new norm. We shall just have to wait and see what the future brings.

More From The Blog

IR35, Here it Comes Again…

IR35, Here it Comes Again…

IR35, Here it Comes Again...In 2021 the reform to IR35 Off-Payroll rules is to be rolled out to the private sector. As before the reform will only affect companies that do not meet the following attributes: an annual turnover below £10m fewer than 50 employees or a...

Solving the Resource Conundrum

Solving the Resource Conundrum

Solving the Resource ConundrumPicture this. One minute all is fine and dandy, you have access to all the resources you could possibly need, then bam an unexpected challenge arises. Suddenly you find yourself lacking the capacity to meet the new need. What are your...

Quality – An Aid to Produce Consistent Rubbish

Quality – An Aid to Produce Consistent Rubbish

Quality - An Aid to Produce Consistent RubbishAnother year has passed, and myself and a colleague have hosted a BSI auditor for our annual ISO9001/TickITplus check-up, and in fact this was more than the regular check, in that it was our 3-year re-certification audit,...

The Hazards of Legacy Systems

The Hazards of Legacy Systems

The Hazards of Legacy SystemsBeing the owner of a software system with a dedicated customer base sounds like the kind of position one would like to find themselves in. At least until it gets superseded and you have to face dealing with a legacy system. Many developers...

How to Test Without Access to The Test Environment

How to Test Without Access to The Test Environment

How to Test Without Access to The Test EnvironmentIn many of our previous articles, we have expressed the importance of achieving a high standard of testing. Potentially blocking this achievement, several factors can come together to affect the quality of your...

The Technical Workshop – How To Make Them Work For You

The Technical Workshop – How To Make Them Work For You

The Technical Workshop - How To Make Them Work For YouAnyone experienced in product design will understand just how valuable a facilitated workshop can be. Bringing together a project's key stakeholders into a single space allows for the exploration of diverse...

Developing Software for Safety Related Systems

Developing Software for Safety Related Systems

Developing Software for Safety Related SystemsSoftware systems should always be both robust and reliable, however the moment you introduce a safety element, this need for reliability increases significantly. The level of safety required is governed by the severity and...

How to Choose an Outsourcing Partner

How to Choose an Outsourcing Partner

How to Choose an Outsourcing PartnerHaving recognised a need to outsource, and worked your way through the initial preparations, you are now in a strong position to seek out a suitable partner. Choosing an outsourcing partner is no trivial affair, so taking the time...