
Saturday, 18 October 2014

How to convert string to Character in Java without using built in functions


Below method: with the built-in method

   String str = "Google";
   char[] arr = str.toCharArray(); // not allowed when built-in methods are forbidden

Below method: without the built-in method

   String str = "Google";
   char[] arr = new char[str.length()];
   for (int i = 0; i < str.length(); i++) {
       arr[i] = str.charAt(i);
   }
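The loop above can be wrapped into a small runnable class to confirm that the manual copy produces the same result as toCharArray(). The class and method names here are illustrative, not from the original post:

```java
public class StringToChars {

    // Manual conversion: copy each character with charAt(), no toCharArray()
    static char[] toCharsManually(String str) {
        char[] arr = new char[str.length()];
        for (int i = 0; i < str.length(); i++) {
            arr[i] = str.charAt(i);
        }
        return arr;
    }

    public static void main(String[] args) {
        String str = "Google";
        char[] manual = toCharsManually(str);
        char[] builtIn = str.toCharArray();
        // Both arrays hold the same characters
        System.out.println(java.util.Arrays.equals(manual, builtIn)); // prints true
    }
}
```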

Friday, 17 October 2014

Equivalence Partitioning

Equivalence partitioning


Equivalence partitioning (EP) is a black-box testing technique. It is very common and is used, at least informally, by almost all testers. Equivalence partitions are also known as equivalence classes.

As the name suggests, equivalence partitioning divides a set of test conditions into partitions (groups) that the software system can be expected to treat in the same way.

As you all know, exhaustive testing is not feasible for complex software, so with equivalence partitioning we need to test only one condition from each partition, because it is assumed that all the conditions in one partition will be treated the same way by the software. If one condition works, then all the conditions within that partition are expected to work, and the tester does not need to test the others; conversely, if one condition fails, then all the other conditions in that partition are expected to fail.

This assumption may not always hold, however, so testers can choose better partitions and also test a few more conditions within those partitions to confirm that the partitions were chosen well.

Example: 
Assume that the application accepts an integer in the range 100 to 999
Valid Equivalence Class partition: 100 to 999 inclusive.
Invalid Equivalence Class partitions: less than 100, more than 999, decimal numbers, and non-numeric characters.
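The partitions above can be exercised with just one representative value each. The validator below is a hypothetical sketch of the application under test, written to show how one test input per partition is enough:

```java
public class RangeValidator {

    // Valid if the input parses as an integer in [100, 999]
    static boolean isValid(String input) {
        try {
            int value = Integer.parseInt(input);
            return value >= 100 && value <= 999;
        } catch (NumberFormatException e) {
            return false; // decimals and non-numeric input fall here
        }
    }

    public static void main(String[] args) {
        // One representative value per equivalence partition
        System.out.println(isValid("500"));   // valid partition: true
        System.out.println(isValid("99"));    // below range: false
        System.out.println(isValid("1000"));  // above range: false
        System.out.println(isValid("12.5"));  // decimal: false
        System.out.println(isValid("abc"));   // non-numeric: false
    }
}
```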

What is Black Box Testing Technique


Black Box Testing

Definition:

Black Box Testing, also known as Behavioral Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional.

The technique of testing without having any knowledge of the interior workings of the application is Black Box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester will interact with the system's user interface by providing inputs and examining outputs without knowing how and where the inputs are worked upon.
Advantages:
  • Well suited and efficient for large code segments.
  • Code access is not required.
  • Clearly separates the user's perspective from the developer's perspective through visibly defined roles.
  • Large numbers of moderately skilled testers can test the application with no knowledge of the implementation, programming language, or operating system.

Disadvantages:
  • Limited coverage, since only a selected number of test scenarios are actually performed.
  • Inefficient testing, because the tester has only limited knowledge of the application.
  • Blind coverage, since the tester cannot target specific code segments or error-prone areas.
  • The test cases are difficult to design.
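To make the idea concrete, a black-box test supplies inputs through the public interface and compares outputs to the specification, without looking at the implementation. The method under test here is a made-up stand-in, not from the original post:

```java
public class BlackBoxDemo {

    // The "system under test": the tester sees only this signature and its
    // specification ("returns the sum"), not the code behind it.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Black-box checks: feed inputs, compare outputs to the expected spec
        System.out.println(add(2, 3) == 5);   // prints true
        System.out.println(add(-1, 1) == 0);  // prints true
    }
}
```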

Wednesday, 15 October 2014

Customizing the Software-Testing Process

The following are eight considerations you need to address when customizing the software-testing process:
1.     Determine the test strategy objectives.
2.     Determine the type of development project.
3.     Determine the type of software system.
4.     Determine the project scope.
5.     Identify the software risks.
6.     Determine when testing should occur.
7.     Define the system test plan standard.
8.     Define the unit test plan standard.

Test-process customization can occur in many ways, but ideally the customization is incorporated into the test processes themselves, primarily the test-planning process.
If it is not incorporated, customization becomes a task for managers, such as:
>Adding new test cases
>Deleting some tasks currently in the test process
>Adding or deleting test tools
>Supplementing the skills of the assigned testers to ensure the tasks in the test process can be executed properly

What is Test Scheduling?


SCHEDULING



A test schedule includes the testing steps or tasks, the target start and end dates, and responsibilities. It should also describe how the test will be reviewed, tracked, and approved.


Project-task scheduling is an important project-planning activity. It involves deciding which work will be taken up when. In order to schedule the project activities, a software project manager needs to do the following:


1. Identify all major work needed to complete the project.

2. Break down a large work component into a number of smaller activities.

3. Determine the dependency among different activities.

4. Establish the most likely estimates for the time durations necessary to complete the activities.

5. Allocate resources to activities.

6. Plan the starting and ending dates for various activities.

7. Determine the critical path. A critical path is the chain of activities that determines the duration of the project.
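Step 7 can be sketched in code: for tasks with durations and dependencies, the project duration equals the longest dependency chain (the critical path). The task data below is illustrative, and the sketch assumes tasks are listed in dependency order:

```java
import java.util.Arrays;

public class CriticalPath {

    // Compute each task's earliest finish time; the maximum is the project
    // duration, set by the critical path. Assumes tasks are indexed so that
    // every dependency {a, b} has a < b (i.e., already in dependency order).
    static int projectDuration(int[] duration, int[][] deps) {
        int n = duration.length;
        int[] finish = new int[n];
        for (int t = 0; t < n; t++) {
            int start = 0; // a task starts when its latest prerequisite finishes
            for (int[] d : deps) {
                if (d[1] == t) start = Math.max(start, finish[d[0]]);
            }
            finish[t] = start + duration[t];
        }
        return Arrays.stream(finish).max().orElse(0);
    }

    public static void main(String[] args) {
        int[] duration = {3, 2, 4, 1};                   // tasks 0..3, in days
        int[][] deps = {{0, 1}, {0, 2}, {1, 3}, {2, 3}}; // edge {a, b}: a before b
        // Critical path is 0 -> 2 -> 3: 3 + 4 + 1 = 8 days
        System.out.println(projectDuration(duration, deps)); // prints 8
    }
}
```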





What is Test Budgeting?



Budgeting


A budget is nothing more than a written estimate of how an organization — or a particular project, department, or business unit — will perform financially. If you can accurately predict your company's performance, you can be certain that resources such as money, people, equipment, manufacturing plants, and the like are deployed appropriately.

The best kind of budget is the one that works. You can choose from three key approaches to developing a budget:

· Top down: Budgets are prepared by top management and imposed on the lower layers of the organization. Top down budgets clearly express the performance goals and expectations of top management, but can be unrealistic because they do not incorporate the input of the very people who implement them.

· Bottom up: Supervisors and middle managers prepare the budgets and then forward them up the chain of command for review and approval. These budgets tend to be more accurate and can have a positive impact on employee morale because employees assume an active role in providing financial input to the budgeting process.

· Zero-based budgeting: Each manager prepares estimates of his or her proposed expenses for a specific period of time as though they were being performed for the first time. In other words, each activity starts from a budget base of zero. By starting from scratch at each budget cycle, managers are required to take a close look at all their expenses and justify them to top management, thereby minimizing waste.

· Expert judgment: An experienced person plans the budget based on experience from similar projects.

Difference Between test Strategy and Test Planning

Test Strategy: A test strategy is an outline that describes the testing approach for the software development cycle. Read that sentence carefully: the Test Strategy document is a high-level document about the testing approach for the software as a whole.
Test Plan: A test plan defines the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. It is oriented more towards the functionality of the software under test.
From the definitions above it is clear that the Test Strategy document defines the overall approach to testing, such as which tools will be used, the scope of testing for the whole project/product, testing measurements and metrics, defect reporting and tracking, and so on, whereas the Test Plan describes the strategy for testing specific functionality of the software: features to be tested, features that need not be tested, suspension criteria, and pass/fail criteria for each feature rather than for the whole software.
For small projects, many companies combine the test plan and the test strategy; some companies create a single Master Test Plan document.

Components of the Test Strategy document

  • Scope and Objectives
  • Business issues
  • Roles and responsibilities
  • Communication and status reporting
  • Test deliverables
  • Industry standards to follow
  • Test automation and tools
  • Testing measurements and metrics
  • Risks and mitigation
  • Defect reporting and tracking
  • Change and configuration management
  • Training plan

Components of the Test Plan document

  • Test Plan id
  • Introduction
  • Test items
  • Features to be tested
  • Features not to be tested
  • Test techniques
  • Testing tasks
  • Suspension criteria
  • Features pass or fail criteria
  • Test environment (Entry criteria, Exit criteria)
  • Test deliverables
  • Staff and training needs
  • Responsibilities
  • Schedule

Basic difference between Test Plan , Test Strategy and Test case
Test Plan: A test plan is a document, developed by the test lead, which covers "what to test", "how to test", "when to test", and "who will test".

Test Strategy: A test strategy is a document, developed by the project manager, which describes which testing techniques to follow and which modules to test.

Test Scenario: A test scenario is a name given to a group of test cases. Test scenarios are handled by the test engineer.

Test Cases: A test case is also a document; it specifies a testable condition to validate a functionality. Test cases are handled by the test engineer.

Order of STLC:

Test Strategy, Test Plan, Test Scenario, Test Cases.

Test Plan

Test Plan

ISTQB Definition:
----------------------------------
A test plan is the project plan for the testing work to be done. It is not a test design specification, a collection of test cases, or a set of test procedures; in fact, most test plans do not address that level of detail. Many people have different definitions for test plans.

The Test Plan document, on the other hand, is derived from the Product Description, the Software Requirement Specification (SRS), or Use Case documents.
The Test Plan document is usually prepared by the Test Lead or Test Manager, and the focus of the document is to describe what to test, how to test, when to test, and who will do what test.

It is not uncommon to have one Master Test Plan as a common document for all the test phases, with each test phase having its own Test Plan document.
There is much debate as to whether the Test Plan document should be a static document like the Test Strategy document mentioned above, or should be updated often to reflect changes in the direction of the project and its activities.
My own personal view is that when a testing phase starts and the Test Manager is "controlling" the activities, the test plan should be updated to reflect any deviation from the original plan. After all, planning and control are continuous activities in the formal test process.
·         Test Plan id
·         Introduction
·         Test items
·         Features to be tested
·         Features not to be tested
·         Test techniques
·         Testing tasks
·         Suspension criteria
·         Features pass or fail criteria
·         Test environment (Entry criteria, Exit criteria)
·         Test deliverables
·         Staff and training needs
·         Responsibilities
·         Schedule

What is Test Strategy?

Test Strategy Definition:

A Test Strategy document is a high-level document, normally developed by the project manager. It defines the "Software Testing Approach" used to achieve the testing objectives. The Test Strategy is normally derived from the Business Requirement Specification document.

Components of the Test Strategy document
-------------------------------------------------------------------
·         Scope and Objectives
·         Business issues
·         Roles and responsibilities
·         Communication and status reporting
·         Test deliverables
·         Industry standards to follow
·         Test automation and tools
·         Testing measurements and metrics
·         Risks and mitigation
·         Defect reporting and tracking
·         Change and configuration management


Another Definition of Test Strategy

A Test strategy is an outline that describes the testing approach of the software development cycle. It is created to inform project managers, testers, and developers about some key issues of the testing process.

What is Test Strategy Document?

The Test Strategy document describes the scope, approach, resources and schedule for the testing activities of the project. This includes defining what will be tested, who will perform testing, how testing will be managed, and the associated risks and contingencies.

The scope of test strategy focuses on the following areas:

    • Scope outlining goals, test processes such as defect management, and team responsibilities, including Business Analyst, Project Manager, Release Manager, Developer, and Tester
    • Outline a mechanism for handling and responding to feedback from stakeholders on testing progress and outcomes
    • Provide guidance to stakeholders involved in testing
When writing a test strategy, the following aspects should be considered:
    • Testing objectives
    • Testing guidelines
    • Testing approach i.e. Requirement Driven Testing
    • Roles and responsibilities
    • Levels of testing
    • Test requirements i.e. test artifacts such as functional specifications, acceptance criteria and test scenarios
    • Test deliverables
    • Entry and exit criteria
    • Defect management i.e. what to do when a defect is reported
    • What test reports will be provided
    • Test environment information and migration procedures
    • Test Constraints
    • Test Risk including project and product risks
The major types of test strategies that are commonly found:
  • Analytical: Let us take an example to understand this. The risk-based strategy involves performing a risk analysis using project documents and stakeholder input, then planning, estimating, designing, and prioritizing the tests based on risk. Another analytical test strategy is the requirements-based strategy, where an analysis of the requirements specification forms the basis for planning, estimating and designing tests. Analytical test strategies have in common the use of some formal or informal analytical technique, usually during the requirements and design stages of the project.
  • Model-based: Let us take an example to understand this. You can build mathematical models for loading and response for e-commerce servers, and test based on that model. If the behavior of the system under test conforms to that predicted by the model, the system is deemed to be working. Model-based test strategies have in common the creation or selection of some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project.
  • Methodical: Let us take an example to understand this. You might have a checklist that you have put together over the years that suggests the major areas of testing to run, or you might follow an industry standard for software quality, such as ISO 9126, for your outline of major test areas. You then methodically design, implement and execute tests following this outline. Methodical test strategies have in common the adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas, and may have an early or late point of involvement for testing.
  • Process – or standard-compliant: Let us take an example to understand this. You might adopt the IEEE 829 standard for your testing, using books such as [Craig, 2002] or [Drabick, 2004] to fill in the methodological gaps. Alternatively, you might adopt one of the agile methodologies such as Extreme Programming. Process- or standard-compliant strategies have in common reliance upon an externally developed approach to testing, often with little – if any – customization and may have an early or late point of involvement for testing.
  • Dynamic: Let us take an example to understand this. You might create a lightweight set of testing guidelines that focus on rapid adaptation or known weaknesses in software. Dynamic strategies, such as exploratory testing, have in common concentrating on finding as many defects as possible during test execution and adapting to the realities of the system under test as it is when delivered, and they typically emphasize the later stages of testing. See, for example, the attack-based approach of [Whittaker, 2002] and [Whittaker, 2003] and the exploratory approach of [Kaner et al., 2002].
  • Consultative or directed: Let us take an example to understand this. You might ask the users or developers of the system to tell you what to test or even rely on them to do the testing. Consultative or directed strategies have in common the reliance on a group of non-testers to guide or perform the testing effort and typically emphasize the later stages of testing simply due to the lack of recognition of the value of early testing.
  • Regression-averse: Let us take an example to understand this. You might try to automate all the tests of system functionality so that, whenever anything changes, you can re-run every test to ensure nothing has broken. Regression-averse strategies have in common a set of procedures – usually automated – that allow them to detect regression defects. A regression-averse strategy may involve automating functional tests prior to release of the function, in which case it requires early testing, but sometimes the testing is almost entirely focused on testing functions that already have been released, which is in some sense a form of post release test involvement.
Some of these strategies are more preventive, others more reactive. For example, analytical test strategies involve upfront analysis of the test basis, and tend to identify problems in the test basis prior to test execution. This allows the early – and cheap – removal of defects. That is a strength of preventive approaches.
Dynamic test strategies focus on the test execution period. Such strategies allow the location of defects and defect clusters that might have been hard to anticipate until you have the actual system in front of you. That is a strength of reactive approaches.
Rather than see the choice of strategies, particularly the preventive or reactive strategies, as an either/or situation, we’ll let you in on the worst-kept secret of testing (and many other disciplines): There is no one best way. We suggest that you adopt whatever test approaches make the most sense in your particular situation, and feel free to borrow and blend.
How do you know which strategies to pick or blend for the best chance of success? There are many factors to consider, but let us highlight a few of the most important:
  • Risks: Risk management is very important during testing, so consider the risks and the level of risk. For a well-established application that is evolving slowly, regression is an important risk, so regression-averse strategies make sense. For a new application, a risk analysis may reveal different risks if you pick a risk-based analytical strategy.
  • Skills: Consider which skills your testers possess and lack, because strategies must not only be chosen, they must also be executed. A standard-compliant strategy is a smart choice when you lack the time and skills in your team to create your own approach.
  • Objectives: Testing must satisfy the needs and requirements of stakeholders to be successful. If the objective is to find as many defects as possible with a minimal amount of up-front time and effort invested – for example, at a typical independent test lab – then a dynamic strategy makes sense.
  • Regulations: Sometimes you must satisfy not only stakeholders, but also regulators. In this case, you may need to plan a methodical test strategy that satisfies these regulators that you have met all their requirements.
  • Product: Some products, like weapons systems and contract-development software, tend to have well-specified requirements. This leads to synergy with a requirements-based analytical strategy.
  • Business: Business considerations and business continuity are often important. If you can use a legacy system as a model for a new system, you can use a model-based strategy.
You must choose testing strategies with an eye towards the factors mentioned earlier, the schedule, budget, and feature constraints of the project and the realities of the organization and its politics.

Saturday, 11 October 2014

Levels of Testing?

There are four levels of software testing: Unit >> Integration >> System >> Acceptance.

1.       Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.
2.       Integration Testing is a level of the software testing process where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
3.       System Testing is a level of the software testing process where a complete, integrated system/software is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.

4.       Acceptance Testing is a level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery.
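As a concrete illustration of the first level, a unit test exercises a single method in isolation and checks that it behaves as designed. The method and class names below are hypothetical, chosen only for the sketch:

```java
public class UnitTestDemo {

    // The unit under test: one small, isolated method
    static boolean isEven(int n) {
        return n % 2 == 0;
    }

    public static void main(String[] args) {
        // Unit tests validate one component in isolation, input by input
        System.out.println(isEven(4)); // prints true
        System.out.println(isEven(7)); // prints false
    }
}
```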

What are the Types of Testing?

Types of testing

Don't get confused with black-box and white-box testing; those are testing techniques. Testing levels are also different from testing types.


Testing Levels:
  • Unit Testing
  • Component Testing
  • Integration Testing
  • System Testing
  • Acceptance Testing
  • Alpha Testing
  • Beta Testing


Here we go, these are the types of testing:
---------------------------------------------

Testing Types (Functional and Non-Functional):

  • Installation
  • Compatibility
  • Development
  • Performance
  • Usability
  • Security
  • Sanity
  • Accessibility
  • Smoke
  • Internationalization / Localization
  • Regression
  • Destructive
  • Recovery
  • Automated
  • User Acceptance


Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.
Incremental integration testing – A bottom-up approach: continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per the requirement. Black-box-type testing geared to the functional requirements of an application.
System testing – The entire system is tested against the requirements. Black-box-type testing that is based on the overall requirements specification and covers all combined parts of a system.
End-to-end testing – Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing – Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the system is not stable enough for further testing, and the build is sent back to be fixed.
Regression testing – Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used.
Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.
Load testing – A performance test to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as input beyond storage capacity, complex database queries, or continuous input to the system or database.
Performance testing – A term often used interchangeably with 'stress' and 'load' testing; checks whether the system meets its performance requirements. Different performance and load tools are used for this.
Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation is checked in this testing.

Install/uninstall testing – Tested for full, partial, or upgrade install/uninstall processes on different operating systems and under different hardware and software environments.
Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing – Can the system be penetrated by hacking? Testing how well the system protects against unauthorized internal or external access, and whether the system and database are safe from external attacks.
Compatibility testing – Testing how well the software performs in a particular hardware/software/operating-system/network environment and in different combinations of the above.
Comparison testing – Comparison of the product's strengths and weaknesses with previous versions or other similar products.
Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result.

Beta testing – Testing typically done by end users or others. Final testing before releasing the application for commercial use.