
Perl Unit Testing — is the subroutine testable?

I have been reading up on unit testing and test-driven development in Perl, and I'm exploring how to incorporate these concepts into my own development. Say I have a Perl subroutine here:

sub perforce_filelist {

    my ($date) = @_;

    my $path = "//depot/project/design/...module.sv";
    my $p4cmd = "p4 files -e $path\@$date,\@now";

    my @filelist = `$p4cmd`; 

    if (@filelist) {
        chomp @filelist;
        return @filelist;
    }
    else {
        print "No new files!";
        exit 1;
    }
}

The subroutine executes a Perforce command and stores the output of that command (which is a list of files) into the @filelist array. Is this subroutine testable? Would testing whether the returned @filelist is empty be useful? I am trying to teach myself how to think like a unit test developer.

There are a couple of things that make testing that perforce_filelist subroutine more difficult than it needs to be:

  • The p4 path is hard-coded
  • The p4 command is constructed inside the subroutine
  • The p4 command is fixed (so, it's always the first p4 in the path)
  • You output directly from the subroutine
  • You exit from inside the subroutine

But your subroutine's responsibility is to get a file list and return it; anything it does beyond that makes it harder to test. If you can't change this code now because you don't control it, you can still structure things like this in the future:

#!perl -T

# Now perforce_filelist doesn't have responsibility for
# application logic unrelated to the file list 
my @new_files = perforce_filelist( $path, $date );
unless( @new_files ) {
    print "No new files!"; # but also maybe "Illegal command", etc
    exit 1;
    }

# Now it's much simpler to see if it's doing its job, and
# people can make their own decisions about what to do with
# no new files.
sub perforce_filelist {
    my ($path, $date) = @_;
    return get_p4_files( $path, $date );
    }

# Inside testing, you can mock this part to simulate
# both returning a list and returning nothing. You 
# get to do this without actually running perforce.
#
# You can also test this part separately from everything
# else (so, not printing or exiting)
sub get_p4_files {
    my ($path, $date) = @_;
    my $command = make_p4_files_command( $path, $date );
    return unless defined $command; # perhaps with some logging
    my @files = `$command`;
    chomp @files;
    return @files;
    }   

# This is where you can scrub input data to untaint values that might
# not be right. You don't want to pass just anything to the shell.
sub make_p4_files_command {
    my ($path, $date) = @_;
    return unless ...; # validate $path and $date, perhaps with logging
    return p4() . " files -e $path\@$date,\@now";
    }

# Inside testing, you can set a different command to fake
# output. If you are confident the p4 is working correctly,
# you can assume it is and simulate output with your own
# command. That way you don't hit a production resource.        
sub p4 { $ENV{"PERFORCE_COMMAND"} // "p4" }
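
With that p4() hook in place, a test can point the command at something harmless. Here is a minimal sketch of that idea, reusing the hypothetical make_p4_files_command from above and plain echo as the stand-in command:

```perl
use strict;
use warnings;
use Test::More;

# The hook from the sketch above: the command name comes from the
# environment, falling back to the real "p4".
sub p4 { $ENV{"PERFORCE_COMMAND"} // "p4" }

sub make_p4_files_command {
    my ($path, $date) = @_;
    return unless defined $path && defined $date;
    return p4() . " files -e $path\@$date,\@now";
}

# Point the command at echo so no production resource is touched.
local $ENV{PERFORCE_COMMAND} = "echo";

my $command = make_p4_files_command( "//depot/project/...", "2020/01/01" );
is( $command,
    'echo files -e //depot/project/...@2020/01/01,@now',
    "command is built with the overridden p4" );

done_testing;
```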

But, you also have to judge whether this level of decomposition is worth it to you. For a personal tool that you use infrequently, it's probably too much work. For something you have to support and that lots of people use, it might be worth it. In that case, you might want the official P4Perl API. Those value judgements are up to you. But, having decomposed the problem, bigger changes (such as moving to P4Perl) shouldn't be as seismic.


As a side note, and not something I'm recommending for this problem, this is the use case for calling a subroutine with & and no argument list. In that form, the called subroutine sees the caller's own @_ as its argument list.

These calls keep passing on the same arguments down the chain, which is annoying to type out and maintain:

    my @new_files = perforce_filelist( $path, $date );
    my @filelist = get_p4_files( $path, $date ); 
    my $command = make_p4_files_command( $path, $date );

With & and no argument list (not even an empty ()), each call passes its own @_ on to the next level:

    my @new_files = perforce_filelist( $path, $date );

    my @filelist = &get_p4_files; 
    my $command = &make_p4_files_command;
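
A tiny self-contained demonstration of that @_ pass-through (the sub names here are made up for illustration):

```perl
use strict;
use warnings;

sub outer {
    # Called as outer('a', 'b'), so @_ is ('a', 'b') here.
    # With & and no argument list, inner sees this same @_.
    return &inner;
}

sub inner {
    my ($x, $y) = @_;
    return "$x-$y";
}

print outer( 'a', 'b' ), "\n";   # prints "a-b"
```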

Whether it's testable depends a lot on your environment. You need to ask yourself the following questions:

  • Does the code depend on a production Perforce installation?
  • Does running the code with random values interfere with production?
  • Does running the code with the same values over and over again always yield the same results?
  • Can the external dependency be unavailable sometimes?
  • Is the external dependency outside of the control of the test?

Some of those things make it very hard (yet not impossible) to run tests for it. Some can be overcome by refactoring the code a little.

It's also important to define what exactly you want to test. A unit test for the function would make sure that it returns the right thing depending on what you put in, but you control the external dependency. An integration test on the other hand would run the external dependency.

Building an integration test for this is easy, but all the questions mentioned above apply. And since you have an exit in your code, you cannot easily trap that: you would have to put the function in a script, run it, and check the exit code, or use a module like Test::Exit.

You also need to have your Perforce set up so that you always get the same results. That might mean putting dates and files there that you control. I don't know how Perforce works, so I cannot tell you how to do that, but in general such controlled data is called a fixture. For a database, your test program would install the fixtures before running the tests, so you get reproducible results.

You also have output to STDOUT, so you need a tool to grab that, too. Test::Output can do that.

use Test::More;
use Test::Output;
use Test::Exit;

# do something to get your function into the test file...

# possibly install fixtures...
# we will fake the whole function for this demonstration

sub perforce_filelist {
    my ($date) = @_;

    if ( $date eq 'today' ) {
        return qw/foo bar baz/;
    }
    else {
        print "No new files!";
        exit 1;
    }
}

stdout_is(
    sub {
        is exit_code( sub { perforce_filelist('yesterday') } ),
            1, "exits with 1 when there are no files";
    },
    "No new files!",
    "... and it prints a message to the screen"
);

my @return_values;
stdout_is(
    sub {
        never_exits_ok(
            sub {
                @return_values = perforce_filelist('today');
            },
            "does not exit when there are files"
        );
    },
    q{},
    "... and there is no output to the screen"
);
is_deeply( \@return_values, [qw/foo bar baz/],
    "... and returns a list of filenames without newlines" );

done_testing;

As you can see, this takes care of all of the things the function does with relative ease. We cover all the code, but we are depending on something external. So this is not a real unit test.

Writing a unit test can be done similarly. There is Test::Mock::Cmd to replace the backticks or qx{} with another function. This could be done manually without that module too. Look at the module's code if you want to know how.
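For illustration, here is one way the manual version might look: overriding Perl's global readpipe (the function behind backticks and qx{}) at compile time. This is roughly the technique such mocking modules use internally; note the override has to be installed in a BEGIN block, before any code using backticks is compiled:

```perl
use strict;
use warnings;

# When set, this coderef fakes all backtick/qx{} calls.
our $fake_qx;

BEGIN {
    no warnings 'once';
    *CORE::GLOBAL::readpipe = sub {
        my ($cmd) = @_;
        # Fall through to the real backticks when no fake is installed.
        return $fake_qx ? $fake_qx->($cmd) : CORE::readpipe($cmd);
    };
}

# Any backticks compiled after this point go through the hook.
$fake_qx = sub { return ( "foo\n", "bar\n" ) };
my @lines = `p4 files -e //depot/...`;
chomp @lines;
print "@lines\n";   # prints "foo bar"
```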

use Test::More;
use Test::Output;
use Test::Exit;

# from doc, could be just 'return';
our $current_qx = sub { diag( explain( \@_ ) ); return; };
use Test::Mock::Cmd 'qx' => sub { $current_qx->(@_) };

# get the function in, I used yours verbatim ...

my $qx; # this will store the arguments and fake an empty result
stdout_is(
    sub {
        is(
            exit_code(
                sub {
                    local $current_qx = sub { $qx = \@_; return; };
                    perforce_filelist('yesterday');
                }
            ),
            1,
            "exits with 1 when there are no files"
        );
    },
    "No new files!",
    "... and it prints a message to the screen"
);
is $qx->[0], 'p4 files -e //depot/project/design/...module.sv@yesterday,@now',
    "... and calls p4 with the correct arguments";

my @return_values;
stdout_is(
    sub {
        never_exits_ok(
            sub {
                # we already tested the args to `` above, 
                # so no need to capture them now
                local $current_qx = sub { return "foo\n", "bar\n", "baz\n"; };
                @return_values = perforce_filelist('today');
            },
            "does not exit when there are files"
        );
    },
    q{},
    "... and there is no output to the screen"
);
is_deeply( \@return_values, [qw/foo bar baz/],
    "... and returns a list of filenames without newlines" );

done_testing;

We can now verify directly that the correct command line is called, without having to set up Perforce with actual files. That makes the test run faster and makes it independent: you can run it on a machine that does not have Perforce installed, which is useful if this function is only a small part of your overall application and you still want to run the full test suite while working on a different part of the app.


Let's take a quick look at the second example's output.

ok 1 - exits with 1 when there are no files
ok 2 - ... and it prints a message to the screen
ok 3 - ... and calls p4 with the correct arguments
ok 4 - does not exit when there are files
ok 5 - ... and there is no output to the screen
ok 6 - ... and returns a list of filenames without newlines
1..6

As you can see it's almost the same as from the first example. I also hardly had to change the tests. Just the mocking strategy was added.

It's important to remember that tests are also code, and the same level of quality should apply to them. They act as documentation of your business logic and as a safety net for you and your fellow developers (including future-you). Clear descriptions of the business cases you are testing are essential for that.

If you want to learn more about the strategy of testing with Perl, and what not to do, I recommend watching the talk Testing Lies by Curtis Poe.

You ask:

Is this subroutine testable?

Yes, it definitely is. However, a question instantly arises: are you doing Development Driven Testing or Test Driven Development? Let me illustrate the difference.

In your current case, you have written the method before the test, whereas the test should drive the development of the function.

If you are trying to follow the basic guidance of TDD, you write your test case first. At this stage the outcome of your unit test is red, since the pieces it tests are still missing.

Then you write the method with just enough bits and pieces to make it compile. Now complete the first test case with an assertion about the method you are testing. If you did it right, your test case is now green, indicating that you can check whether there is anything to refactor.

This gives you the basic cycle of TDD: red, green, refactor.

Summarized, you can test and assert at least two things in your method:

  • That @filelist is returned and is not empty
  • The failure case, where the subroutine prints a message and exits with status 1

Also make sure that you are unit testing without external dependencies, like a file system, because involving other moving parts of the system makes it integration testing.

As a final note: as with everything, experience comes through trying and learning. Always ask, at least yourself, and then your business peers, whether you are testing the right thing and whether testing that part of the system brings business value.
