Friday, August 8, 2014

Configure an Android App with Ant



CONFIGURE THE ANDROID PROJECT WITH ANT

1. Set the Android SDK path:

   "PATH=/Users/Vinothini/Downloads/android-sdk-macosx/tools/:$PATH"

2. Copy the project files and create two directories, "sample.app" and "sample.test".

3. Go to your project directory with a console and run the following command:
                   "android update project -p ."
Note the "." (dot) after the -p (path to project) flag; it means "current folder".
This will update and rebuild your Android project files and create a build.xml file, which ANT will use to build your project.

4.Then go to your test project folder with console and run update command for the test project:

"android update test-project -m _PATH_TO_ANDROID_PROJECT -p _TEST_PROJECT_PATH_"

At this point, we have two projects with prepared build.xml files.

                   Let's try to build them!

5.Go to your project folder and run:

                      "ant clean debug"
This command should start compilation.


6. Afterwards, cd to your test project folder, start Android emulator and run in console:

                 "ant all clean emma debug install test"
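The steps above can be collected into one small Python helper that shells out to the same commands. This is only a sketch: the directory names come from step 2, but the relative `-m` path and the subprocess approach are assumptions.

```python
import subprocess

def build_commands(app_dir, test_dir):
    """(cwd, argv) pairs mirroring steps 3-6 above."""
    return [
        (app_dir,  ["android", "update", "project", "-p", "."]),
        (test_dir, ["android", "update", "test-project",
                    "-m", "../" + app_dir, "-p", "."]),  # assumed relative layout
        (app_dir,  ["ant", "clean", "debug"]),
        (test_dir, ["ant", "all", "clean", "emma", "debug", "install", "test"]),
    ]

def run_build(app_dir="sample.app", test_dir="sample.test", execute=False):
    """With execute=True, run each command in order; otherwise just
    return the command list (a dry run for inspection)."""
    commands = build_commands(app_dir, test_dir)
    if execute:
        for cwd, argv in commands:
            subprocess.run(argv, cwd=cwd, check=True)  # stop on first failure
    return commands
```

Calling `run_build(execute=True)` from the folder that contains both projects would replay the whole flow, assuming the SDK tools and Ant are on PATH and the emulator is running for the final step.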


REFERENCE:

http://www.ontestautomation.com/running-selenium-webdriver-tests-in-jenkins-using-ant/

http://blackriver.to/2012/02/android-continuous-integration-with-ant-and-jenkins-part-1/

JMeter Installation and Start-Up



Step 1: Verify the Java installation on your machine


Now, open a console and execute the following java command.

Windows - Open Command Console and run: c:\> java -version
Linux - Open Command Terminal and run: $ java -version
Mac - Open Terminal and run: $ java -version


Step 2: Set the JAVA_HOME environment variable

Windows - Set the environment variable JAVA_HOME to C:\Program Files\Java\jdk1.7.0_25
Linux - export JAVA_HOME=/usr/local/java-current
Mac - export JAVA_HOME=$(/usr/libexec/java_home) [For OS X 10.5 and later]

Step 3: Download JMeter

Download the latest version of JMeter from http://jmeter.apache.org/download_jmeter.cgi.

Step 4: Run JMeter

Windows - jmeter.bat
Linux - jmeter.sh
Mac - jmeter.sh (in a console, run sh ./jmeter.sh)



Reference: http://www.tutorialspoint.com/jmeter/jmeter_quick_guide.htm

MonkeyTalk




MonkeyTalk Installation:

1. Download the zip file here. This download contains the IDE, the Agents (which you will need for the next step), and sample applications.
2. Unzip it wherever you want, but remember where you put it because you'll need it later.
3. On OS X, move the entire MonkeyTalkIDE folder into your Applications folder and double-click MonkeyTalk.app to run.
4. On Windows, move the entire MonkeyTalkIDE folder into your Program Files folder and double-click MonkeyTalk.exe to run. (Note: If you put MonkeyTalkIDE in your Program Files folder, you will have to choose a different location for your workspace.)


For Android Applications:

Open your Android Project in Eclipse and follow these instructions.

1. Convert your Android project to AspectJ.
2. MonkeyTalk-agent.jar can be found in the "agents" folder in the MonkeyTalk package you downloaded earlier, and can be downloaded here. The exact name of the jar might vary depending on the version, but it should always start with "MonkeyTalk-agent".
3. Create a "libs" folder in your Android project, if you don't already have one.
4. Copy the .jar into the libs folder.
5. Right-click on MonkeyTalk-agent.jar > AspectJ Tools > Add to Aspectpath.
6. Update your AndroidManifest.xml to include the following two permissions:
             android.permission.INTERNET,
             android.permission.GET_TASKS
7. Update the project properties (right-click on the project > Properties > Java Build Path), select the Order and Export tab, and check the checkbox next to the AspectJ Runtime Library to export it.
8. Deploy your application to an Android device or emulator.
9. Copy the Ant jar file from the MonkeyTalk folder into the "apache-ant/lib" folder.
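The two permissions from step 6 go inside the <manifest> element of AndroidManifest.xml as <uses-permission> tags. A minimal sketch (the package name here is a placeholder, not from the original project):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.sample">  <!-- placeholder package name -->

    <!-- Permissions required by the MonkeyTalk agent (step 6) -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.GET_TASKS" />

    <!-- rest of the manifest (application element, activities, ...) -->
</manifest>
```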




Configure Xcode (iPhone App):
  1. Download and unzip the MonkeyTalk zip file for your OS.
  2. Open your application's project in Xcode.
  3. Duplicate your application's build target by right-clicking on it and selecting Duplicate from the menu. A new target will be created called "YourApp copy".
        Rename "YourApp copy" to something like YourAppMonkey.
        You may also want to rename the Scheme from the Manage Schemes window.
  4. Add the downloaded MonkeyTalk lib to your project via File > Add to "YourApp"... from the menu.
        (When the dialog box appears, navigate to the directory where you unzipped the MonkeyTalk zip file, and select the MonkeyTalk iOS lib from pathToMonkeyTalkFolder/agents/iOS.)
  5. In the Add to Targets box, deselect YourApp, select YourAppMonkey, and click Add.
  6. Configure libraries and build settings:
        Right-click on the YourAppMonkey build target, and select the Build Phases tab.
        On the Link Binaries With Libraries tab, add libsqlite3.dylib, CFNetwork.framework, QuartzCore.framework, and libstdc++.6.0.9.dylib (Xcode 5 only)
        if your application is not already using them. (These frameworks are required by MonkeyTalk.)
  7. On the Build Settings tab, scroll down to the Linking section and add to your Other Linker Flags:
                                   -all_load -lstdc++

  8. Choose your duplicated test target from the Scheme menu in Xcode and run on the Simulator or a Device.

Friday, August 1, 2014

Manual Testing


Manual Testing Concepts
  • Technical Factors:
      • Meet customer requirements in terms of functionality.
      • Meet customer expectations in terms of performance, usability, security, etc.
  • Non-Technical Factors:
      • Reasonable cost to purchase.
      • Time to release.

1.1 Software Quality Assurance (SQA):

     Monitoring and measuring the strength of the development process is called Software Quality Assurance. Ex: Life Cycle Testing.

1.2 Software Quality Control (SQC):



       The validation of the final product before releasing it to the customer is called SQC.

2. Software Development process:

        Fish model software development: the upper angle is life cycle development and the lower angle is life cycle testing.


BRS: Business Requirement Specification defines the requirements of the customer to be developed as software. This document is also known as Customer Requirement Specification (CRS) or User Requirement Specification (URS).

SRS: Software Requirement Specification defines the functional requirements to be developed and the system requirements to be used (hardware and software).
Example: BRS defines addition (customer requirement).
SRS defines how to solve the customer requirement.

Review: It is a static testing technique. In a review, responsible people estimate the completeness and correctness of the corresponding documents.

HLD: High Level Design document defines the overall architecture of the system from root functionalities to leaf functionalities. This HLD is also known as Architectural Design or External Design.

LLD: Low Level Design document defines the internal logic of corresponding module (or) functionality. The LLD is also known as Internal Logic Design document.

Prototype: A sample model of an application without functionality is called prototype.
Program: A set of executable statements is called a Program.
Module: A set of programs is called as a Module or Unit.
Build: The set of modules is called as Software Build or Product.

White Box Testing: It is a coding level testing technique to verify the completeness and correctness of program structure. Programmers will follow this technique. It is also known as Glass Box Testing (or) Clear Box Testing (or) Open Box Testing.

Black Box Testing: It is a build level testing technique. In this testing test engineers will validate every feature depending on external interface.

Software Testing: The Verification and Validation of a software application is called software testing.
Verification: Are we building the product right?
Validation: Are we building the right product?

3.1 V model

        V stands for Verification and Validation. This model defines mapping between development process and testing process.

The real V-model is expensive to follow for small and medium scale organizations. For this reason, small and medium scale organizations maintain a separate testing team for the System Testing phase.

3.2 Reviews during Analysis

     In general the software development process starts with requirements gathering and analysis. In this phase business analyst category people will develop the BRS and SRS. After developing the documents, the same business analysts will conduct review meetings to estimate the completeness and correctness of the documents (BRS → SRS). In these review meetings, they will concentrate on the checklist below.
  • Are the requirements correct?
  • Are the requirements complete?
  • Are they achievable (w.r.t Technology)?
  • Are they reasonable (w.r.t Time)?
  • Are they testable?

3.3 Reviews during Design


      After completion of analysis and its reviews, the design category people will develop HLDs and LLDs. The same design category people will conduct review meetings to estimate the completeness and correctness of the design documents (HLD → LLD). In the review they will concentrate on the checklist below.
  • Is the design understandable?
  • Are the correct requirements met?
  • Is the design complete?
  • Is the design followable (w.r.t. coding)?
  • Does it handle errors?

3.4 Unit Testing


     After completion of design and its reviews, programmers will concentrate on coding to construct the software physically. In this phase programmers will test every program through a set of white box testing techniques w.r.t. the LLD.
  • Basis paths testing.
  • Control structure testing.
  • Program technique testing (Time).
  • Mutation Testing

3.4.1 Basis Path Testing


     In this coverage programmers will verify the execution of program without any syntax and run time errors. In this coverage programmers will execute a program more than one time to cover all areas of that program coding while running.

3.4.2 Control Structure Testing


    In this coverage programmers will concentrate on correctness of the program functionality. In this coverage programmers will check statements in the program including variables declaration, IF conditions, Loops, etc….
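The coverage described above can be sketched in Python (the function and its tests are invented for illustration): one test per outcome of each IF condition and the loop.

```python
def grade(scores):
    """Average a list of scores and map the result to a grade band."""
    if not scores:              # branch 1: empty input
        return "no data"
    total = 0
    for s in scores:            # loop: executed once per score
        total += s
    average = total / len(scores)
    if average >= 50:           # branch 2: pass/fail decision
        return "pass"
    return "fail"               # branch 3

# One test per control structure outcome.
assert grade([]) == "no data"        # empty-input branch
assert grade([80, 60]) == "pass"     # loop + pass branch
assert grade([20, 30]) == "fail"     # loop + fail branch
print("all control paths exercised")
```

Running every assertion once means each condition has been evaluated both ways, which is the essence of control structure coverage.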

3.4.3 Program Technique Coverage


       In this coverage programmers will verify the execution time of program to improve speed in processing. If the execution time is not reasonable then the programmers will change the structure of the program without disturbing functionality.

3.4.4 Mutation Testing


      After completion of program testing, the corresponding programmers will review the completeness and correctness of that testing. A mutation is a change in the coding of a program. In mutation testing, programmers make changes in various areas of the program and repeat the previously completed tests. If all the tests still pass on the changed program, the earlier testing was incomplete, and the programmers will continue testing the program. If any one of the tests fails on the changed program, the earlier testing was complete, and the programmers will concentrate on further coding.

Note: Of the white box testing techniques, the first 3 techniques test the program code, while mutation testing estimates the completeness and correctness of the tests on the program.
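The mutation idea can be sketched in Python (the function, the mutant, and both test suites are invented for illustration): a suite that still passes after an operator is mutated has a coverage gap.

```python
# Program under test.
def is_adult(age):
    return age >= 18

# Mutant: the comparison operator is deliberately changed (>= becomes >).
def is_adult_mutant(age):
    return age > 18

def suite_passes(fn, tests):
    """Run every (input, expected) pair against fn."""
    return all(fn(arg) == expected for arg, expected in tests)

# A weak suite that never exercises the age == 18 boundary.
weak_tests = [(20, True), (10, False)]
# A stronger suite that adds the boundary case.
strong_tests = weak_tests + [(18, True)]

# The mutant survives the weak suite: the earlier testing was incomplete.
print(suite_passes(is_adult_mutant, weak_tests))    # True - mutant not detected
# The strong suite kills the mutant: the testing is adequate here.
print(suite_passes(is_adult_mutant, strong_tests))  # False - mutant detected
```

Real mutation tools generate and run many such mutants automatically; this sketch only shows why a surviving mutant points at a missing test.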

3.5 Integration Testing


      After completion of dependent programs' development and unit testing, programmers will interconnect the programs to construct a complete software build. In this stage programmers will verify the integration of programs using four types of approaches.
  • Top Down Approach.
  • Bottom Up Approach.
  • Hybrid Approach.
  • System Approach.

3.5.1Top Down Approach

       In this approach the programmers will interconnect the main module with some of the sub-modules; in place of the remaining sub-modules, programmers will use temporary programs called Stubs.

3.5.2Bottom up Approach

     In this approach the programmers will interconnect sub-modules without connecting them to the main module. In place of the main module, programmers will use a temporary program called a Driver.

3.5.3Hybrid Approach

    It is a combined approach of Top Down and Bottom Up approaches. This approach is also known as Sandwich approach.

3.5.4 System Approach

   It is also known as Final Integration (or) Big Bang approach. In this integration programmers will interconnect the programs after completion of the total development.
Note: In general, programmers will interconnect programs through any one of the above methods depending on circumstances.
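The stub and driver ideas from the approaches above can be sketched in Python (the module names, the 10% tax rule, and the amounts are all invented for illustration):

```python
# --- Top-down: the main module is real, a sub-module is stubbed. ---
def tax_stub(amount):
    """Stub: temporary stand-in for the unfinished tax sub-module."""
    return 0.0  # canned answer, just enough to let the main module run

def checkout(amount, tax_fn):
    """Main module under test; the tax sub-module is injected."""
    return amount + tax_fn(amount)

print(checkout(100.0, tax_stub))  # 100.0: main-module flow verified

# --- Bottom-up: the sub-module is real, the main module is a driver. ---
def tax(amount):
    """Real sub-module: flat 10% tax."""
    return amount * 0.10

def driver():
    """Driver: temporary caller standing in for the unfinished main module."""
    assert tax(100.0) == 10.0
    assert tax(0.0) == 0.0
    return "sub-module OK"

print(driver())
```

The hybrid approach simply mixes the two: stubs below the level being tested, a driver above it.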

3.6 System Testing


   After completion of integration testing and receiving the build from the development team, the testing team will concentrate on system testing, conducted using black box testing techniques.
System Testing is divided into 3 sub stages.
  • Usability Testing.
  • Functional Testing
  • Non-Functional Testing.

3.6.1 Usability Testing


    After receiving the software build from the development team, the testing team will conduct usability testing. In this test the testing team will estimate the "user-friendliness" of all screens in the software build. There are two sub tests.

3.6.1.1 User Interface Testing or UI Testing

     In this test, the testing team will apply below 3 factors on every screen of the software build.

· Ease of use: To estimate the understandability of a screen.
· Look and Feel: To estimate the attractiveness of a screen.
· Speed in Interface: To estimate that the length of navigation is short.

3.6.1.2 Manual Support Testing

   During this test the testing team will validate the correctness and completeness of help documents. These help documents are also known as User Manuals.

3.6.2 Functional Testing

     It is a mandatory testing level in the testing team's responsibilities. During this test, the testing team will concentrate on meeting customer requirements through the sub tests below.
  • Requirement Testing.
  • Sanitation Testing.

3.6.2.1 Requirements Testing

    It is also known as Functionality Testing. During this test the responsible testing team will apply different coverage techniques as discussed below on the functionalities of software build.

·GUI Coverage / Behavioral Coverage: Changes in properties of objects in screens while operating.
·Error Handling Coverage: To prevent wrong operation on screens.
·Input Domain Coverage: Testing correct type and size of input values
·Manipulations Coverage: Returning correct output values.
·Back End Coverage: Valid impact of screens operations on back end data base tables.
·Functionalities Order Coverage: The arrangements of screens in the software build with respect to order of functionalities.
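Input domain coverage from the list above can be sketched as a boundary value check in Python (the quantity field, its valid range of 1 to 100, and the helper are invented for illustration):

```python
def accept_quantity(value):
    """Hypothetical screen-field rule: integer between 1 and 100."""
    if not isinstance(value, int):
        return False            # wrong type rejected (type coverage)
    return 1 <= value <= 100    # size/range coverage

# Boundary values around both edges of the valid range.
cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in cases.items():
    assert accept_quantity(value) == expected

assert accept_quantity("50") is False  # correct type is also checked
print("input domain coverage cases passed")
```

Testing just inside, on, and just outside each boundary is the usual way to cover an input domain with few cases.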

3.6.2.2 Sanitation Testing

    During this test the testing team will concentrate on extra functionalities with respect to requirements of the customer. This testing is also known as garbage testing.

3.6.3 Non-Functional Testing


     After completion of user interface and functional testing, the testing team will concentrate on non-functional testing to validate quality characteristics of the software build, like security and performance.

3.6.3.1 Recovery Testing

    This testing is also known as Reliability Testing. During this test, the testing team will validate whether the software build recovers from an abnormal state to a normal state.

3.6.3.2 Compatibility Testing

     It is also known as Portability Testing. During this test, the testing team will validate whether the software build runs on the customer's expected platforms or not.

3.6.3.3 Configuration Testing

     It is also known as Hardware Compatibility Testing. During this test the testing team will validate whether the software build supports different technology hardware devices or not.
Example:Different technology printers.
Different topology networks, etc….

3.6.3.4 Inter Systems Testing

       It is also known as End-to-End Testing. During this test the testing team will validate whether the software build co-exists with other software applications to share common resources.
Example:Sharing data, sharing hardware devices, printers, speakers, sharing memory, etc….

3.6.3.5 Installation Testing

    During this test, the testing team will establish a customer-site-like configured environment. The testing team then practices installing the software build into that environment.

3.6.3.6 Load Testing

     The execution of the software build under the customer-expected configuration and customer-expected load to estimate the speed of processing is called load testing. Here, load means the number of concurrent users working on the software. This is also known as Scalability Testing.

3.6.3.7 Stress Testing

    The execution of the software build under customer expected configuration and various load levels from low to peak is called stress testing. In this testing, testing team will concentrate on load handling by the software build.
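The load and stress ideas above can be sketched with Python threads standing in for concurrent users (the request function, its delay, and the user counts are invented for illustration):

```python
import threading
import time

def request():
    """Hypothetical user action against the build; here just a tiny delay."""
    time.sleep(0.01)

def run_load(users):
    """Simulate `users` concurrent users and return the elapsed wall time."""
    threads = [threading.Thread(target=request) for _ in range(users)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Load testing: one customer-expected level.
print(f"10 users: {run_load(10):.3f}s")
# Stress testing: step the load from low to peak and watch how it is handled.
for users in (1, 10, 50, 100):
    print(f"{users:>3} users: {run_load(users):.3f}s")
```

Tools such as JMeter (covered earlier in this post) do the same thing at scale, with real HTTP requests and reporting.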

3.6.3.8 Storage Testing

     Testing whether the system meets its specified storage objectives: testing data of different formats and on different devices, and verifying the efficiency of data storage in devices and the proper retrieval of the data.

3.6.3.9 Data Volume Testing

     Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application’s performance on it.
Example: MS Access supports a maximum database size of 2 GB.

3.6.3.10 Parallel Testing

     It is also known as Comparative Testing. During this test, the testing team will compare the software build with other competitive software in the market, or with an old version of the same software build, to estimate completeness. This is applicable only to software products, not to software applications.

3.7 User Acceptance Testing (UAT)


     After completion of all possible functional and non-functional tests, the project manager will concentrate on user acceptance testing to collect feedback from customer-site people. There are two approaches to conduct UAT: α-Test and β-Test.

α-Test:
1. Software applications.
2. At the development site.
3. By real customers.

β-Test:
1. Software products.
2. In a customer-site-like environment.
3. By customer-site-like people.

Both approaches collect feedback.

3.8 Testing during maintenance


    After completion of the user acceptance test and its modifications, project management concentrates on forming a release team with a few developers, a few testers and a few hardware engineers. This release team goes to the customer's site and conducts port testing.
During this port testing the release team concentrates on the below factors at the customer's site.
  • Compact installation
  • Overall functionality
  • Input devices handling
  • Output devices handling
  • Secondary storage devices handling
  • Co-existence with other software to share common resources
  • Operating system error handling
After completion of port testing, the release team provides training sessions to customer-site people.
During utilization of the software, customer-site people send change requests to our organization. There are two types of change request to be solved.

4. Testing Terminology

Monkey Testing:


     A test engineer conducting a test on the application build through the coverage of main activities only is called monkey testing or chimpanzee testing.

Exploratory Testing:

     A tester conducting a test on an application build through the coverage of activities level by level is called exploratory testing.

Ad-Hoc Testing:

    A tester conducting a test on the application build with respect to predetermined ideas is called ad-hoc testing.

Big bang Testing:

   An organization conducting a single stage of testing after completion of entire module development is called big bang testing or informal testing.

Incremental Testing:

    An organization following multiple stages of testing, from document level to system level, is called incremental testing or formal testing.
Example: LCT (life cycle testing).

Sanity Testing
   
     Observing whether the build released by the development team is stable enough for complete testing to be applied is called sanity testing, also known as tester acceptance testing (TAT) or build verification testing (BVT).

Smoke Testing:

     An extra shake-up in sanity testing is called smoke testing. In this phase the test engineer will try to find the reason why the build is not working before starting work on it.

Static versus Dynamic Testing:

  A tester conducting a test on the application build without running it is called static testing.
                 Example: usability, alignment, font, style, etc.
  A tester conducting a test through the execution of the application build is called dynamic testing.
Example: functional, performance and security testing.

Manual Vs Automation Testing

  • A test engineer conducting a test on the application build without using any third-party testing tool is called Manual Testing.
  • Test impact indicates test repetition with multiple test data.
                           Example: functionality testing.

  • A tester conducting a test on the application build with the help of a testing tool is called Automation Testing.
  • Test criticality indicates the complexity of executing the test manually.
                           Example: load testing.


Re-Testing

    The re-execution of a test on the same application build with multiple test data is called re-testing.
                            Ex: multiple test data.


Regression Testing

      The re-execution of selected tests on a modified build, to ensure the bug fix works and to check for side effects, is called regression testing.

Error: A mistake in coding is called an Error.
Defect: A mismatch found by a test engineer during testing, due to mistakes in coding, is called a Defect or issue.
Bug: A defect accepted by developers to be solved is called a Bug.





REFERENCES AND RESOURCES:
1. http://www2.sas.com/proceedings/sugi30/141-30.pdf
2. http://www.softwaretestingclass.com/category/testing-concepts/