Protractor is an end-to-end test framework for AngularJS applications, built on top of WebDriverJS. Protractor runs tests against your application in a real browser, interacting with it as a user would.

This post discusses two advanced techniques for working with Protractor:

  • Reusing your existing UI services for data creation utilities
  • Capturing browser logs and flushing them to a file for each test run

Data Creation

When you are writing a new functional test you want to focus on specific business logic.
Suppose you are writing an Employee management application.
You have an employee grid and you want to check that its filters are working properly.
In order to test that, you first need to set up some initial data.
Creating data through the UI is a time-consuming task:
clicking the "Add" button, waiting for the form to appear, filling in the form fields, and clicking Save.

Instead, you can use your existing AngularJS entity creation service to send the REST call that creates a new entity, without any UI interaction.
That's right: you already wrote a service that knows how to form a REST call to the server to create new employees. Simply use that same service from the test code to set up test data.
Sweet, right?

This is based on Protractor's addMockModule function, which uses deferred bootstrap to load additional Angular modules alongside your application.

In your onPrepare function:

        
// New module definition
var dataUtilMockModule = function () {
     // Create a new module which depends on your data creation utilities
    var utilModule = angular.module('dataUtil', ['platform']);
    // Create a new service in the module that creates a new entity
    utilModule.service('EntityCreation', ['EntityDataService', function (EntityDataService) {

        /**
         * Returns a promise which is resolved/rejected according to entity creation success
         * @returns {*}
         */
        this.createEntity = function (details,type) {
            // This is your business logic for creating entities
            var entity = EntityDataService.Entity(details).ofType(type);
            var promise = entity.save();
            return promise;
        };
    }]);
};

browser.addMockModule('dataUtil', dataUtilMockModule);

// Bootstrap Angular with mock modules
browser.get(browser.params.app);


And that's it!
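
One hedged side note: if a later suite needs the application without this extra module, Protractor can also unregister mock modules:

browser.removeMockModule('dataUtil'); // unregister just the utility module
browser.clearMockModules();           // or drop all registered mock modules at once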

Then, in your test, use the EntityCreation service like this:

// This function is serialized and executed inside the browser.
// executeAsyncScript passes the script arguments in order and always
// appends its "done" callback as the last argument. (Named parameters
// would work just as well, since the whole function source is shipped
// to the browser.)
var populateData = function () {
    var el = document.querySelector(arguments[0]); // the app's root element selector
    var callback = arguments[1];                   // supplied by executeAsyncScript
    try {
        angular.element(el).injector().get('EntityCreation')
            .createEntity({FirstName: 'Mickey', LastName: 'Mouse'}, 'Cartoon')
            .then(function (data) {
                callback(data);
            }, function (err) {
                callback(err);
            });
    } catch (e) {
        callback(e);
    }
};

browser.driver.executeAsyncScript(populateData, browser.rootEl)
    .then(function () {
        console.log('Executed populateData script successfully');
    }, function (browserErr) {
        if (browserErr) {
            throw 'Error while populating data ' + JSON.stringify(browserErr);
        }
    });



And just like that, you've created a new entity without any UI interaction.
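
For context, here is a hedged usage sketch showing where this setup fits in a spec. The bindings (filter.lastName, the employee repeater) are illustrative assumptions, not part of the original example:

describe('employee grid filters', function () {
    beforeEach(function () {
        // Loading the app bootstraps the mock module registered in onPrepare
        browser.get(browser.params.app);
        // Seed an employee before exercising the grid
        browser.driver.executeAsyncScript(populateData, browser.rootEl);
    });

    it('filters the grid by last name', function () {
        element(by.model('filter.lastName')).sendKeys('Mouse');
        expect(element.all(by.repeater('employee in employees')).count()).toBe(1);
    });
});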

Capturing the browser log to file

When you're analyzing the failure of a functional test, the browser console has priceless information for root cause analysis.

This is how you dump it to a file for every test:

In your configuration file, set Selenium's logging preference to ALL:

    
capabilities: {
    'browserName': 'chrome',
    'chromeOptions': {
        'args': ['incognito', 'disable-extensions', 'start-maximized', 'enable-crash-reporter-for-testing']
    },
    'loggingPrefs': {
        'browser': 'ALL'
    }
},


Now create an afterEach function that flushes the browser console to a file.
The sample code below also captures a screenshot for every failed test:
var fs = require('fs'),
    path = require('path');

// Add global spec helpers in this file
var getDateStr = function () {
    // Filename-safe date string, e.g. "2013-Sep-03-21-58-03"
    var d = (new Date() + '').replace(new RegExp(':', 'g'), '-').split(' ');
    return [d[3], d[1], d[2], d[4]].join('-');
};

var errorCallback = function (err) {
    console.log(err);
};

// Format a log entry timestamp (milliseconds) as HH:MM:SS
var timestampToDate = function (timestamp) {
    var date = new Date(timestamp);
    // Left-pad single-digit values with a zero, e.g. 9 -> "09"
    var pad = function (val) {
        return (val < 10 ? '0' : '') + val;
    };
    // Displays time in 10:30:23 format
    return [date.getHours(), date.getMinutes(), date.getSeconds()].map(pad).join(':');
};

// Take a screenshot automatically after each failing test, and flush the
// browser console log for every test. Note: jasmine.getEnv().currentSpec is
// the Jasmine 1.x API (a Jasmine 2 variant is sketched after this block).
afterEach(function () {
    var passed = jasmine.getEnv().currentSpec.results().passed();
    // Replace all space characters in the spec name with dashes
    var specName = jasmine.getEnv().currentSpec.description.replace(new RegExp(' ', 'g'), '-'),
        baseFileName = specName + '-' + getDateStr(),
        reportDir = path.resolve(__dirname, '../report'),
        consoleLogsDir = path.resolve(reportDir, 'logs'),
        screenshotsDir = path.resolve(reportDir, 'screenshots');

    if (!fs.existsSync(reportDir)) {
        fs.mkdirSync(reportDir);
    }

    if (!passed) {
        // Create the screenshots dir if it doesn't exist
        console.log('screenshotsDir = [' + screenshotsDir + ']');
        if (!fs.existsSync(screenshotsDir)) {
            fs.mkdirSync(screenshotsDir);
        }

        var pngFileName = path.resolve(screenshotsDir, baseFileName + '.png');
        browser.takeScreenshot().then(function (png) {
            console.log('Writing file ' + pngFileName);
            // writeFileSync is synchronous and takes no callback; errors throw
            try {
                fs.writeFileSync(pngFileName, png, {encoding: 'base64'});
            } catch (err) {
                console.log(err);
            }
        }, errorCallback);
    }

    // Flush the browser console log to file
    var logs = browser.driver.manage().logs(),
        logType = 'browser';
    logs.getAvailableLogTypes().then(function (logTypes) {
        if (logTypes.indexOf(logType) > -1) {
            var logFileName = path.resolve(consoleLogsDir, baseFileName + '.txt');
            logs.get(logType).then(function (logsEntries) {
                // Create the logs dir if it doesn't exist
                if (!fs.existsSync(consoleLogsDir)) {
                    fs.mkdirSync(consoleLogsDir);
                }
                console.log('Writing file ' + logFileName);
                for (var i = 0; i < logsEntries.length; ++i) {
                    var logEntry = logsEntries[i];
                    var msg = timestampToDate(logEntry.timestamp) + ' ' + logEntry.type + ' ' + logEntry.message;
                    // appendFileSync is synchronous and takes no callback
                    fs.appendFileSync(logFileName, msg + '\r\n', {encoding: 'utf8'});
                }
            }, errorCallback);
        }
    }, errorCallback);

});
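
A caveat raised in the comments below: jasmine.getEnv().currentSpec was removed in Jasmine 2, so the afterEach above only works with framework: 'jasmine' (Jasmine 1.x). Here is a minimal sketch for framework: 'jasmine2', tracking the current spec result through a custom reporter:

// Register once, e.g. in onPrepare: Jasmine 2 reports spec results
// through reporter callbacks instead of exposing a currentSpec object.
var currentSpec;
jasmine.getEnv().addReporter({
    specStarted: function (result) { currentSpec = result; },
    specDone: function (result) { currentSpec = result; }
});

afterEach(function () {
    // Equivalent to currentSpec.results().passed() in Jasmine 1.x
    var passed = currentSpec.failedExpectations.length === 0;
    var specName = currentSpec.fullName.replace(new RegExp(' ', 'g'), '-');
    // ...the screenshot and log flushing logic above stays the same...
});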


Now you're debugging like a pro!

Comments

  1. This sample cannot work if I set framework: 'jasmine2'.

  2. Why access the arguments object by index, rather than using named parameters?

  1. 1. Writing automation is hard. Really hard. 


    If you don't get it right, the level of noise it adds to your project significantly outweighs its benefits.

    A flaky test affects build stability.
    A flaky build causes the dev organization to lose trust in the entire testing process, to the point where R&D becomes indifferent to test failures.
    Flaky tests are eventually ignored and deleted.

    The obvious benefit of automation is finding regressions as early as possible, but if the number of false positives is large, no one notices the real regressions; they are a needle in a haystack.

    When choosing an automation framework, its main job is to support the authoring of stable test scripts.

    Here are some points to consider:


    • It should help the test author isolate each test as much as possible.
    • It should allow easy setup and teardown of test data through direct calls to the server, without any UI interaction, which is slow and unreliable.
    • It should promote stability through better identification of web elements.
    • It should provide a report for failure root cause analysis with as much information as possible, e.g. logs, screenshots, movies.
    Starting a successful automation project must involve a top developer from day one.
    Automation code should be treated as production code, and should be created by professionals.
    Once the project is alive and running, adding new scripts tends to be copy-pasting existing examples.
    Make sure your core examples are rock solid and demonstrate best practices.


    2. Writing automation is a journey, not a destination 

    Automation projects should never be limited to reaching a goal.
    The minute you stop watching and maintaining your test scripts, they are bound to get out of sync and fail.
    Someone needs to constantly monitor the tests status, execution time and reliability, and follow a well-defined triage process for failing tests.
    This person should be dedicated to the process and do this daily.

    3. Not all tests are born equal 

    Given the inherent flakiness of UI automation, you can't make go/no-go decisions based on the health of your entire test suite.
    Some tests must never fail, since a failure indicates a basic flaw in the system.
    Some are allowed to fail, since the functionality they are testing is unstable.
    Choose your test suites per development cycle: run a limited set of 100% stable, important tests on each push, and run additional test suites periodically.

    Don’t forget to track and analyze the “off cycle” test executions.


    4. UI automation is not a silver bullet 

    Don’t expect UI automation to solve all of your problems.
    Allocate time and resources to manual testing, both during the dev cycle and after it is done.

  2. Check out http://slides.com/eitanpeer/jasmine-2015 for a new revision of my slides on Jasmine.
    Now with Jasmine 2.4.

    I love the work they've done at slides.com. Creating the presentation was easy and fun, and the result looks great.

  3. When the Selenium grid is handling more requests than it has available executors, new requests become pending and wait for a slave matching the desired capabilities to become free.
    The data is available via a GET request with a body, which can be issued using curl as follows:

    curl -X GET http://selenium_hub_host:4444/grid/api/hub/ -d '{"configuration":["newSessionRequestCount"]}'

    The value of newSessionRequestCount is the number of pending sessions.
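
    For Node users, here is a hedged sketch of the same query using only Node's standard http module. The host name is a placeholder, and the response is assumed to be JSON containing newSessionRequestCount, as the curl example above shows:

    var http = require('http');

    // GET with a body, mirroring the curl call above
    var req = http.request({
        host: 'selenium_hub_host',
        port: 4444,
        path: '/grid/api/hub/',
        method: 'GET',
        headers: {'Content-Type': 'application/json'}
    }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            console.log('Pending sessions: ' + JSON.parse(body).newSessionRequestCount);
        });
    });
    req.write(JSON.stringify({configuration: ['newSessionRequestCount']}));
    req.end();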

    The following Java code snippet checks the number of pending requests on the Selenium grid.


            
        // Requires Apache HttpClient 4.x and Gson. Key imports:
        // import org.apache.http.HttpResponse;
        // import org.apache.http.client.HttpClient;
        // import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
        // import org.apache.http.entity.ContentType;
        // import org.apache.http.entity.StringEntity;
        // import org.apache.http.impl.client.HttpClientBuilder;
        // import com.google.gson.GsonBuilder;

        /**
         * Class for sending a GET request with a body, see
         * http://stackoverflow.com/questions/12535016/apache-httpclient-get-with-body
         */
        private static class HttpGetWithEntity extends HttpEntityEnclosingRequestBase {
            public final static String METHOD_NAME = "GET";

            @Override
            public String getMethod() {
                return METHOD_NAME;
            }
        }

        /**
         * Get the number of pending requests on the Selenium grid
         * @return the value of newSessionRequestCount
         * @throws IOException
         * @throws URISyntaxException
         */
        private Double getSeleniumGridPendingRequestsNum() throws IOException,
                URISyntaxException {
            HttpGetWithEntity getPendingRequests = new HttpGetWithEntity();
            URL pendingRequests = new URL("http://selenium-grid.host:4444/grid/api/hub");
            getPendingRequests.setURI(pendingRequests.toURI());
            String PENDING_REQUEST_COUNT = "newSessionRequestCount";
            getPendingRequests.setEntity(new StringEntity("{\"configuration\":[\""
                    + PENDING_REQUEST_COUNT + "\"]}",
                    ContentType.APPLICATION_JSON));
            HttpClient client = HttpClientBuilder.create().build();
            HttpResponse response = client.execute(getPendingRequests);
            BufferedReader rd = new BufferedReader(new InputStreamReader(
                    response.getEntity().getContent()));
            StringBuilder result = new StringBuilder();
            String line;
            while ((line = rd.readLine()) != null) {
                result.append(line);
            }
            Map<Object, Object> responseKeys =
                    new GsonBuilder().create().fromJson(result.toString(), Map.class);
            Object newSessionRequestCount = responseKeys.get(PENDING_REQUEST_COUNT);
            // Gson parses JSON numbers as Double, so the value looks like 1.0
            return Double.valueOf(newSessionRequestCount.toString());
        }

  5. Check out my latest post on HP Software's dev blog - a technical presentation on how to unit test AngularJS Javascript apps - http://h30499.www3.hp.com/t5/HP-Software-Developers-Blog/Intuitive-AngularJS-testing-with-Jasmine/ba-p/6159427#.Uf_6HG0t3j4


  6. Check out my very own blog post, published on HP Software's Developer channel:


    HP Communities - What goes around comes around (Javascript testing ... - Enterprise Business Community

  7. HP Software has a strong team of professionals publishing technical articles on a wide variety of technologies and methodologies.

    Check it out!

    HP Communities - HP Software Developers Blog - Enterprise Business Community

  8. Jasmine is a framework for writing unit tests for Javascript code.

    Jasmine spies permit many spying, mocking and faking behaviors.

    There are several ways of creating spies: spyOn, jasmine.createSpy, and jasmine.createSpyObj.

    This post describes use cases for choosing the right method to create your spies:

    Use spyOn for existing objects on which you need to spy on a specific method, e.g. console.log.

    Use createSpy for functions called by the code under test that have no return value; this is typically used for callbacks.

    Use createSpyObj for interactions of the code under test with its dependencies, usually other classes.

    If you find yourself creating a new object just to call spyOn on its methods, you're using the wrong spy creation method. Use createSpyObj instead.
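
    For illustration, a minimal sketch of that anti-pattern next to the idiomatic form (the logger dependency is a hypothetical example):

    it('verifies interaction with the logger dependency', function () {
        // Anti-pattern: hand-rolling an object just to spy on each method
        var handRolled = {log: function () {}, warn: function () {}};
        spyOn(handRolled, 'log');
        spyOn(handRolled, 'warn');

        // Idiomatic: one call creates an object with all methods already spied
        var logger = jasmine.createSpyObj('logger', ['log', 'warn']);
        logger.log('boot');
        expect(logger.log).toHaveBeenCalledWith('boot');
    });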

    If you need to add properties to a spy object, JavaScript makes this the easiest thing in the world:
    // Person is a spy object with getName and getAge methods and an id property
    var person = jasmine.createSpyObj("person", ["getName", "getAge"]);
    person.id = 1234;
    See fiddle


  9. I've been working on a presentation to co-workers on how to write unit tests for AngularJS components with Jasmine.

    I started by explaining how to write unit tests with Jasmine in general, and then advanced to testing Angular.

    I wrote many JSFiddle examples you might find useful.
    Some of the examples are from jasmine documentation, others are from angularjs documentation and some are my own examples.

    In addition, there are two exercises for self-training: one for Jasmine basics and another for Angular.
    Both exercise solutions are also published.

    Writing unit tests with Jasmine and testing Angular is not really news...
    This page simply concentrates examples for testing techniques in a single page linking to editable, runnable examples in JSFiddle.

    I'd like to take this opportunity to thank the developers of Jasmine, Angular and JSFiddle for their incredible solutions. I'm also a big fan of Karma, but I don't have any JSFiddles to show for it :)

    Jasmine
    Angular
    • Angular mocks - module and inject - fiddle
    • httpBackend - fiddle
    • exceptionHandlerProvider, logProvider, timeout - fiddle
    • Promises - this one is hidden in $q documentation - fiddle
    • Filters - fiddle
    • Controllers - using $controller service - fiddle
    • Services - using $provide service - fiddle
    • Directives - using $compile - fiddle
    • Self training - exercise - fiddle
    • Self training - solution - fiddle

  10. I found a workaround for GWT issue 4342 worth sharing:

    If you extend a GWT layout panel, say SplitLayoutPanel, you will probably get an error similar to "Type mismatch: cannot convert from CustomSplitLayoutPanel to SplitLayoutPanel".
    The defect and a proposed fix can be found in the GWT issue above.

    If you need a quick fix, you can work around this issue using a provided UI field, so that control over panel creation stays in the application's hands.

    I created a class extending SplitLayoutPanel, called MySplitLayoutPanel.
    Snippet from the UiBinder template:

    <custom:MySplitLayoutPanel ui:field="panel">
        <custom:center size=...></custom:center>
    </custom:MySplitLayoutPanel>

    Snippet from the corresponding Java file:

    @UiField(provided = true)
    MySplitLayoutPanel panel;

    public MyView() {
        panel = new MySplitLayoutPanel();
        // Additional initializations
        initWidget(uiBinder.createAndBindUi(this));
    }

