
Monday, November 19, 2012

Analysis of the Effectiveness of Global Virtual Teams

You can download the paper from here: Analysis of the Effectiveness of Global Virtual Teams.pdf

Analysis of the Effectiveness of Global Virtual Teams in
Software Engineering Projects

Abstract : 

Global Software Development (GSD) projects use virtual teams that are located in different parts of the world. These teams communicate mostly through telecommunication technologies, which creates a communication problem: it is never as effective as the face-to-face communication available to co-located teams. The paper takes one phase of the software development process and studies the complex issues associated with Global Virtual Teams (GVTs). The study analyzes the ease of use of technology, the trust between teams, and the task structure, and how these impact the efficiency, effectiveness, and satisfaction level of the virtual teams.

Motivation :

GVTs rarely meet face to face, and hence many problems arise that are not present among co-located teams.
There is a lot of debate on whether Global Virtual Teams have a positive or a negative effect on project quality, and hence there is a need to study the issue with the help of a real-world experiment.
It is also necessary to study whether the early stages of the software development process can be carried out at off-shore centers using virtual teams.

Objectives : 

Analyze the factors that significantly affect the quality of the output of the requirements definition phase prepared by the virtual teams.
Examine the effectiveness of the Global Virtual Teams in performing these projects.

Lessons Learnt :

In this experimental study, two teams of students, one from IIM-Lucknow and the other from the University of Western Ontario, were asked to coordinate with each other and gather requirements for a project. The following lessons were learnt from the experiment:


  1. The differences in time, culture, and team size do not have any significant impact on the final deliverables.
  2. Trust between the virtual teams is low at the beginning of the project but increases as time goes by. Trust is a major factor affecting the software engineering process, and an increase in the trust level has a positive impact on it.
  3. Academic and cultural differences do not have any significant impact on the project.
  4. The learning process is better when teams from different backgrounds interact with each other, and so the learning experience is also richer.
  5. If the project is well structured and the technology is easy to use, the effectiveness and satisfaction level of the teams increase.
  6. Heterogeneous teams have a positive impact on the project deliverable, contrary to the common perception that heterogeneous teams do not perform well.


Conclusion :

Every organization implementing GSD has to manage Global Virtual Teams. The study helps organizations create and manage their GVTs more effectively.
The setup of the experiment is similar to the real-world scenario of Global Software Development, where heterogeneous teams are spread across the globe. Even though the study covered only the requirements phase, it is important to know how teams react in a GSD scenario.

Friday, November 9, 2012

Patterns of Testing in GSD


You can download the original paper from : Patterns of Testing in GSD
You can download the presentation from : Patterns of Testing in GSD

 
   Global Software Development (GSD) is a widely used approach in software development. Nowadays every large company wants to have sites in low-cost development areas like India and China. This provides them with benefits such as low labor costs, access to a large workforce, and time-zone effectiveness. Compared to co-located software development projects, however, GSD projects face additional problems arising from geographic, temporal, cultural, and technological differences.

   The impact of GSD on testing is one field that has received very little research attention. This paper is an attempt to get closer to a solution: the authors discuss some of the important patterns of testing that are used in a GSD scenario.

Pattern 1: Light weight pre-acceptance test

Problem: How do you prove that the system works when you assign the complete system test to the offshore test team but the final responsibility for the project lies with you?

Forces:
  • The on-site PM may not be willing to invest a lot of money in the on-site team.
  • The trust issue is always there: the on-site manager may not completely trust the off-site team.

Solution:
  • Name one on-site tester (on-tester) who knows the system inside out.
  • After the off-site test, the system is re-checked by the on-tester.
  • The on-tester performs a lightweight pre-acceptance test.
  • The on-tester reports bugs to the off-testers.
  • The off-site test manager (off-TM) takes this input and writes more test cases.
  • Once the pre-acceptance test is completed successfully, the system can be given back.

Benefits:
  • A lightweight, quick quality check of the system.
  • Early feedback for the off-TM on whether the off-site testing is effective.

Risks
  • Deciding when the pre-acceptance test should be performed can be an issue.


Pattern 2: Extension of day

Problem: How do you set up the testing/bug-fixing process efficiently when there is a time difference between the teams?

Forces:
  • Testing and bug-fixing are alternating tasks that depend on one another.
  • During bug-fixing questions might occur that might have to be answered by the tester.

Solution:
  • After development, the system is handed over to the on-tester.
  • The on-tester executes the tests and documents the bugs found.
  • When the off-developer starts work the next day, he has the test results and can work on fixing the bugs.
  • At the end of his day, the off-developer updates the bug-fix status.
  • The on-tester can then start testing again at the start of his own day.

Benefits:
  • A longer working day and a reduction in the duration of the test/bug-fix cycle.

Risks:
  • The tester might not be available.
  • GSD-specific characteristics, such as time-zone differences.
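The timing mechanics behind this pattern can be illustrated with a small calculation. The sites, UTC offsets, and the 8-hour workday below are invented assumptions for the example, not values from the paper:

```python
# Rough sketch of the "extension of day" effect: two sites, each working
# an 8-hour day in different time zones, jointly cover a longer day.
# Offsets and the workday length are illustrative assumptions.

def combined_coverage_hours(offset_a, offset_b, workday=8):
    """Hours per day covered by two sites whose clocks differ by the
    given UTC offsets, assuming both work `workday` hours starting at
    9:00 local time."""
    gap = abs(offset_a - offset_b) % 24
    overlap = max(0, workday - gap)        # hours both sites are working
    return min(24, 2 * workday - overlap)

# e.g., a tester at UTC+1 and a developer at UTC+5:30:
print(combined_coverage_hours(1, 5.5))     # 12.5 hours of joint coverage
```

With no time difference the coverage stays at 8 hours; with a gap of 8 hours or more, the two non-overlapping shifts give a full 16-hour day, which is exactly the "longer day" benefit listed above.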


Pattern 3 : Tester’s sparring partner

Problem: How do you ensure test coverage when the off-testers are very specialized?

Forces:
  • The off-testers can lack creativity.
  • They may not see beyond their own specialization.
  • The TM needs to be able to rely on the completeness of the tests accomplished at the off-site.

Solution:
  • Organize a formal review process.
  • An expert is assigned to take up the role of a sparring partner.
  • This role should be filled by someone who knows the specifications exactly, preferably a person who knows both sides of the coin.
  • The sparring partner acts as the mediator between the test team and the development team.
  • Once the tester gives a list of bugs to the sparring partner, he reviews the bugs, checks them for completeness, and if necessary modifies them or adds his own bugs outside the scope of the tester.
  • Ideally, there should be one sparring partner per tester.

Benefits:
  • Reduces risk of low coverage.
  • More confident testers, as an expert on the system is available for questions.

Risks:
  • The role of the sparring partner may be cut back.
  • Communication between the sparring partner and the tester may be inefficient because of specialization.
  • GSD-specific characteristics, such as the very strict separation of careers like tester and developer in India.

Using these patterns

The patterns can be linked to each other using relations such as 'requires', 'recommends', 'complements', or 'excludes', as shown in the diagram below.

Relations between various patterns
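Such pattern relations could be recorded and queried with a minimal lookup table. The relation pairs below are illustrative assumptions, not the actual contents of the paper's diagram:

```python
# A minimal sketch of a queryable catalogue of pattern relations.
# The specific pairs are invented examples.

RELATIONS = {
    ("Extension of day", "Light weight pre-acceptance test"): "recommends",
    ("Light weight pre-acceptance test", "Tester's sparring partner"): "complements",
}

def relation(a, b):
    """Look up the relation between two patterns, in either direction."""
    return RELATIONS.get((a, b)) or RELATIONS.get((b, a)) or "none"

print(relation("Tester's sparring partner", "Light weight pre-acceptance test"))
```

A team setting up a tailored test process could walk such a table to check that a chosen combination of patterns does not include an 'excludes' pair.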

Advantages of knowing these patterns

  1. The gap in the literature is reduced; so far, little has been reported on the challenges of and solutions for testing in GSD.
  2. The problem-solution summaries give the reader a quick overview.
  3. If users know the patterns, they can combine them to set up a tailored solution.

Thursday, November 8, 2012

Importance of Software Testing


Today's software market :
  • is much bigger
  • is more competitive
  • has more users

        Software has become a part of our daily routine. From simple toys to spaceships, from cell phones and PDAs to servers, we are touched by technology every day. But building software is not everything; we need to make sure that it is worthwhile and that its quality is the best. For this purpose we need software testing.

       Testing software can be of utmost importance, sometimes even more than the development itself. Inadequate software testing costs the US alone between $22 and $59 billion annually. Don't believe me? Check it out yourselves. History has also shown us some big disasters caused by the smallest of bugs in software. Safety-critical systems especially need to be handled very sensitively and carefully. In the past, aircraft have crashed due to a bug in the navigation system, patients have died because of a bug in radiation-therapy software, power systems have failed, leaving cities without power for days, and space missions have failed, wasting billions of dollars. And why? All because of small bugs in the software.

     From the above examples it must be clear that merely developing software is not enough; testing is also very important.

Program managers often say, "Testing is too expensive." But not testing is even more expensive. Hear what some of the greatest minds have to say...


“It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise—this thing gives out and [it is] then that 'Bugs'—as such little faults and difficulties are called—show themselves and months of intense watching, study and labor are requisite. . .” 
– Thomas Edison


“An analyzing process must equally have been performed in order to furnish the Analytical Engine with the necessary operative data; and that herein may also lie a possible source of error. Granted that the actual mechanism is unerring in its processes, the cards may give it wrong orders. ” 
– Ada, Countess Lovelace (notes on Babbage’s Analytical Engine)


"To err is human, but to really foul things up you need a computer." - Paul Ehrlich



Wednesday, November 7, 2012

Risk-Driven Model For Task Allocation

You can download the original paper from : Risk-Driven Model For Task Allocation
You can download the presentation from : Risk-Driven Model For Task Allocation 

     In the GSD scenario, the stakes are comparatively high, and most of the risks or potential benefits depend directly on the way tasks are assigned to the different sites. There is a need to assess the risks involved in order to get the work done effectively. This post is a summary of the paper "A Risk-driven Model for Work Allocation in Global Software Development Projects" by Ansgar Lamersdorf, Jürgen Münch and Alicia Fernández-del Viso Torre.


     There are a lot of risks involved in the GSD scenario, and most of them occur because of inefficient task allocation: consider, for example, the problems of language, culture, time zones, and differing abilities. One root cause of all these problems is task allocation. If we can address the task allocation problem effectively, and find a metric or a model that suggests task allocation possibilities to the decision makers, the risks involved can be reduced by a huge factor.

     The authors have developed a model that does exactly this: an Integrated Assignment Model. The problem of task allocation is to find the best match between the elements of two sets (as shown in the figure below):

  1. A set of tasks that together form the software development project, e.g., a needed role, responsibility for a certain part of the product, a process step, etc.
  2. A set of sites (locations) that together form the available resources.
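The size of this matching problem grows quickly: if every task can in principle go to any site, the candidate assignments are the elements of sites^tasks. The task and site names below are invented for illustration:

```python
from itertools import product

# A small sketch of the assignment space for the two sets described
# above.  Every candidate assignment maps each task to one site.
tasks = ["requirements", "design", "coding", "testing"]
sites = ["Munich", "Bangalore", "Shanghai"]

assignments = list(product(sites, repeat=len(tasks)))
print(len(assignments))   # 3**4 = 81 candidate assignments
```

Even this toy example yields 81 alternatives, which is why the model has to rank suggestions rather than leave the enumeration to the decision maker.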


      The model has to identify the different task allocation possibilities and then suggest the best assignment. For doing this, of course, it has to consider an organization-specific set of influencing factors (such as complexity, experience, task coupling, and time-zone differences), and also the impact of these factors on the project goals.

The authors formulated the following goals for their work:

  1. Document the decision process
  2. Suggest work assignments
  3. Evaluate the consequences of different alternatives.

MODEL:

      Based on a division of the project and resources into tasks and sites and a subsequent characterization of the tasks and sites, the model is able to provide a weighted list of assignment suggestions and to evaluate every alternative with respect to the expected project risks. The figure below gives an overview of the model input and output.

The model consists of three main sub models

  1. Causal Model
  2. Stochastic Assignment Model
  3. Risk Identification Model


Causal Model:
The causal model stores the organization-specific, relevant influencing factors and their impact on the project goals, as shown in the figure. The weight of each relationship is calculated based on past experience in other GSD projects.



Stochastic Assignment Model:
The work of this sub-model is to do the following:


  1. The causal model is transformed into Bayesian networks.
  2. Using the characterization of the project and the resources, the impact of every possible assignment on project goals is inferred in the Bayesian networks. The impact is aggregated based on weighted project goals and stored as probabilistic distributions for the cost functions needed in the task assignment algorithm.
  3. In a large number (e.g., 1000) of runs, the cost functions are instantiated based on the probabilistic distributions and the assignment algorithm from distributed systems is executed.
  4. For every run, the returned optimal assignment is stored. All returned assignments are then ordered by their number of appearances (i.e., the number of runs in which the assignment was returned) and finally presented as a weighted list of assignment suggestions.
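The stochastic part of this procedure can be sketched in a few lines: draw concrete costs from the distributions, find the cheapest assignment per run, and tally the winners. The tasks, sites, and cost parameters below are invented for illustration, and a simple exhaustive minimum stands in for the paper's assignment algorithm:

```python
import random
from collections import Counter
from itertools import product

# Simplified sketch of the stochastic assignment step.  All numbers
# are invented; (mean, std dev) give the cost of each task at each site.
tasks = ["spec", "code"]
sites = ["A", "B"]
cost_params = {("spec", "A"): (10, 2), ("spec", "B"): (12, 3),
               ("code", "A"): (15, 4), ("code", "B"): (11, 2)}

def one_run(rng):
    # Draw a concrete cost for every task-site pair ...
    cost = {k: rng.gauss(mu, sd) for k, (mu, sd) in cost_params.items()}
    # ... and return the cheapest complete assignment for this run.
    return min(product(sites, repeat=len(tasks)),
               key=lambda a: sum(cost[t, s] for t, s in zip(tasks, a)))

rng = random.Random(0)                      # fixed seed for repeatability
tally = Counter(one_run(rng) for _ in range(1000))

# Assignments ordered by the number of runs in which they were optimal:
for assignment, runs in tally.most_common():
    print(dict(zip(tasks, assignment)), runs)
```

The ordered tally is exactly the "weighted list of assignment suggestions" that the model outputs; an assignment that wins most runs is robust against the uncertainty in the cost estimates.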


Risk Identification Model:
The risk identification model is able to predict GSD-related risks for a given project and work allocation. Therefore, it can be used for analyzing and comparing the different assignment alternatives suggested by the application of the stochastic assignment model. The model does this by transferring lessons learned into a set of semi-formal logical rules.
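The idea of lessons learned as semi-formal logical rules can be sketched as a list of condition-message pairs checked against a project characterization. The rules and thresholds below are invented examples, not the paper's actual rule set:

```python
# Hypothetical rule set: each rule checks a condition on the project
# characterization and reports a risk if it holds.

RULES = [
    (lambda p: p["timezone_gap_h"] > 6 and p["task_coupling"] == "high",
     "Highly coupled tasks across a large time-zone gap: expect communication delays."),
    (lambda p: p["common_experience"] == "low",
     "Sites have little shared experience: expect trust and hand-off problems."),
]

def identify_risks(project):
    """Return the risk messages whose conditions hold for this project."""
    return [message for condition, message in RULES if condition(project)]

project = {"timezone_gap_h": 9, "task_coupling": "high", "common_experience": "low"}
for risk in identify_risks(project):
    print("-", risk)
```

Running such rules against each assignment alternative from the stochastic model would flag the GSD-related risks specific to that alternative.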

Does the model work..?

Yes, the model seems to give reasonable suggestions. Its results were compared to those of managers, and it was shown that the model does in fact work most of the time.

Monday, November 5, 2012

Systematic Task Allocation Evaluation in Distributed Software Development


You can download the original paper from : Systematic Task Allocation Evaluation in Distributed Software Development
You can download the presentation from : Systematic Task Allocation Evaluation in Distributed Software Development

   In the present world, Global Software Development (GSD) has become a very common method of developing software. Many organizations have established at least one of their sites in countries like India or China, where they can find cheap labor, and hence major challenges like communication, cultural differences, and time shift have become inevitable. Task allocation in such cases can become extremely difficult considering all these problems. Here I would like to talk about some of the methods discussed by Jürgen Münch and Ansgar Lamersdorf in one of their publications, where they describe an approach to allocating tasks in distributed software development. We must follow a systematic approach to evaluating the tasks before allocating them. In particular, the factors influencing a decision and their relative weights have to be determined individually for every project. Here is a possible way to do it. First we will understand the goals of the problem at hand.
Goal 1: Identify the project-specific influencing factors for a task allocation decision and their impact.
Goal 2: Evaluate the possible task allocation alternatives according to the project-specific influencing factors.

Here is the short version of the steps discussed in the paper.

Process Steps:
  1. Define Viewpoint: At first, the viewpoint of the decision (i.e., the decision maker) must be identified. This is the person responsible for the decision and the one who has the relevant information about the context of the allocation decision.

  2. Define Context: The context of the project comprises the characterization of the available sites (and their relations such as time-zone distances), the distribution of work to different tasks, and constraints on the task allocation decision (i.e., already assigned tasks). It thus defines the input for the task allocation decision.

  3. Define Focus: The focus of the evaluation is on the project goals. Now, these goals have to be specified further: Which criteria define project success (e.g., cost, quality)? The different possible assignments will later be evaluated with respect to these criteria. If possible, the measures should be quantifiable. If different measures are named, a strategy for weighting them against each other should also be defined (e.g., if assignment A results in higher cost and higher quality compared to assignment B, which is rated as being suited better with respect to the goals and the given context?).

  4. Define Variation Factors: Variation factors are all those factors that have an allocation-dependent influence on the evaluation criteria. For example, if developer experience differed between sites, then assigning more work to the experienced sites would probably decrease effort. Given that effort is an evaluation criterion, developer experience would therefore be a variation factor (because its impact on effort would be dependent on the question of which tasks are assigned to the experienced or inexperienced sites). We categorize variation factors into (a) characteristics of sites (e.g., cost rate, experience), (b) dependencies between sites (e.g., time-zone differences), (c) characteristics of tasks (e.g., size), (d) dependencies between tasks (e.g., coupling), and (e) task-site dependencies (e.g., the knowledge for performing task X existing at site Y).

  5. Define Baseline: The goal of this process step is to derive a baseline for the success measures. Depending on the overall goal (i.e., establishing distributed development vs. modifying a distributed task assignment) and available knowledge, the baseline can reflect collocated development (all work would be assigned to one site) or an already established assignment (work would be assigned as in previous projects). The baseline may, for instance, be determined by expert estimations, historical project data, or using standard prediction models.

  6. Define Impact of Variation Factors: In this process step, the impact of every variation factor (defined in step 4) on every criterion in the focus (defined in step 3) is evaluated. This can be done with the help of expert estimations or by analyzing past projects. For example, if effort is in the evaluation focus and time-zone difference was defined as a variation factor, the step should answer the question “How does a certain time-zone difference between two sites affect the effort overhead for tasks assigned to these sites?” If possible, this should be done quantitatively.

  7. Assess Variation Factors: For all tasks and sites identified in step 2, the values of the variation factors are now assessed for the project at hand.

  8. Evaluate Assignment Alternative: Finally, every possible assignment can now be evaluated using the results of the previous steps. Depending on whether quality focus and impact of variation factors were described quantitatively or not, the evaluation can now provide predictions or guidelines and hints for every assignment that is of interest.
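The final evaluation step can be sketched as a weighted scoring of alternatives. All names, weights, and predicted values below are invented for illustration, with outcomes normalized so that higher is better:

```python
# Hypothetical sketch of step 8: score each assignment alternative
# against the weighted success criteria from step 3.

WEIGHTS = {"cost": 0.6, "quality": 0.4}        # from "Define Focus"

# Predicted, normalized outcome of each alternative, e.g. derived from
# the baseline (step 5) and the variation-factor impacts (steps 6-7).
alternatives = {
    "all on-site":  {"cost": 0.3, "quality": 0.9},
    "split A/B":    {"cost": 0.7, "quality": 0.6},
    "all off-site": {"cost": 0.9, "quality": 0.4},
}

def score(outcome):
    """Weighted sum of the criteria values."""
    return sum(WEIGHTS[c] * v for c, v in outcome.items())

best = max(alternatives, key=lambda name: score(alternatives[name]))
print(best, round(score(alternatives[best]), 2))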


Of course, following these steps only increases the likelihood of success. You can never precisely measure how much knowledge a person has and classify them as good or bad, so the quantification of the factors is very difficult to determine.