Writing easily understandable tests is a cornerstone of the BDD paradigm and also helps build up living documentation of your system. Anton Angelov has created a series of articles dedicated to SpecFlow that starts with an introductory tutorial. And with DevOps now an important approach for rapid software delivery, how do you perform software testing in a context of reduced cycle times?
In his article, Gopinath C H explains how to perform testing in continuous integration and continuous deployment workflows, providing examples based on the Visual Studio and Team Foundation Server tools. In this blog post, Rui Sun and Andre Hamilton explore some Visual Studio capabilities that will make it easier to test and verify Windows 8 applications. Visual Studio has a simulator that reduces your need to have physical devices of every form factor at your disposal for testing.
You can launch your application in the simulator directly from within Visual Studio through the debugging action. Then you can interact with your application using the mouse, or using simulated touch on your development computer with gestures such as swipe, pinch to zoom, and rotation. Visual Studio also provides a unit test library project for Windows 8 applications written in C# and VB.
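For a flavor of what goes into such a unit test library, here is a minimal MSTest-style sketch; the Calculator class is a hypothetical stand-in for application code, and a Windows 8 (WinRT) test project uses an analogous attribute-based framework.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical application code under test.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        var calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}
```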
The new Visual Studio tries to improve the developer unit testing experience, particularly for agile teams. The entire unit testing framework has been made extensible, which allows you to use testing frameworks such as xUnit. This talk presents the history of unit testing in Visual Studio and then takes you through the product, showing off the new unit testing features.

Turning to test configurations in Microsoft Test Manager (MTM): if you try to delete a test configuration that is in use, you are prompted to set it to Inactive instead. To assign configurations to Test Cases, you have a few options.
The first is to go to the Properties page of the plan and change the configuration there. This instantly applies the change to all Test Cases contained within the plan and to any Test Cases you add to the plan at a later date. The second option is to set configurations at the Test Suite level. To make a change here, uncheck the Use Configurations from Parent Test Suite option and check any additional test configurations you want to include. Changes you make here apply to the individual suite and any suites contained in the currently selected suite. For example, looking at Figure , if you select the Iteration 1 node and change the default configurations, the new set of configurations applies to all Test Suites in Iteration 1.
If, however, you change the default configurations at Test Suite 1 (Log onto the blog engine), the change applies only to this suite. Changing the configuration here is not automatically reflected on the Test tab. To illustrate this, after making one of the previous changes, select the Test tab; notice that there are the same number of tests to be run as there are Test Cases. You see how to change this in a minute. Another option is to assign test configurations at the suite level for existing Test Cases. To do this, right-click the suite in the left pane of the Contents tab, and choose Select Test Configurations for all Tests.
This displays the screen shown in Figure . One option available to you here is the Reset Defaults button. If you have previously changed the default configuration at the suite level and want to apply it to all existing Test Cases, clicking Reset Defaults does this for you. As shown in Figure , pressing this button automatically selects both configurations for all tests listed. After assigning one or more Test Cases to different configurations and applying the changes, you return to the Plan Contents page.
The one apparent difference is that the Configurations column now has a value greater than 1. This column notes how many configurations are assigned to a given Test Case; similarly, you might see the Tester for a Test Case listed as Multiple. You will revisit this when assigning testers to Test Cases is discussed. You see the changes when you select the Test tab.
You can now execute two more tests than there are Test Cases; these additional tests have different configurations, as shown in Figure . (For example, five Test Cases with two of them assigned a second configuration yields seven tests.) An additional option for setting test configurations is to select one or more tests and click the Configurations button. This enables you to set configurations just for the specific tests selected. So far you have seen how to set test configurations for a plan. Options can be set at the Plan, Suite, and Test Case level, and generally they cascade down. The next step is to assign and manage testers in the context of the plan.
As with the test configurations, you can assign testers in a number of ways. The first and most obvious way, and certainly the easiest to report on, is simply to assign the Test Case work item to a tester. There are numerous scenarios in which the person who writes the Test Case does not also execute it. There are also scenarios in which the Test Case, as previously mentioned, is executed on different configurations, and different testers work those different configurations. To assign a tester to a Test Case, you work at the suite or Test Case level.
The screen for both is the same; the only difference is which testers show up. This brings you to the page shown in Figure . You can select individual testers for each Test Case and configuration, either one at a time or in bulk. To assign testers in bulk, select the Test Cases you want to assign using the Control or Shift keys and change the assignment for any one of them; the change is duplicated to all selected Test Cases. Remember that the Plan tab has a distinct list of Test Cases, but because different testers are assigned for different configurations, MTM aggregates all the testers assigned to a Test Case as Multiple.
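If you want to inspect these assignments outside the MTM user interface, the following is a hedged sketch using the Team Foundation Server test management client API; the collection URL, project name, and plan ID are placeholder assumptions. Each test point pairs one Test Case with one configuration, which is why a Test Case can roll up as Multiple.

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class ListTesterAssignments
{
    static void Main()
    {
        // Placeholder collection URL; substitute your own TFS instance.
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection"));
        var project = tfs.GetService<ITestManagementService>()
                         .GetTeamProject("MyProject");

        ITestPlan plan = project.TestPlans.Find(1); // assumed plan ID

        // Each test point is one Test Case paired with one configuration,
        // so a Test Case with two configurations yields two points.
        foreach (ITestPoint point in plan.QueryTestPoints("SELECT * FROM TestPoint"))
        {
            Console.WriteLine("Test Case {0} on {1}: {2}",
                point.TestCaseId,
                point.ConfigurationName,
                point.AssignedTo == null ? "(unassigned)" : point.AssignedTo.DisplayName);
        }
    }
}
```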
You can see the individual testers on the Test tab. That covers the mechanics of a Test Plan, but how do you use it to manage the testing workflow? What are the consequences of managing it in any particular way? How does its usage translate into reporting? Before jumping into the planning, take a look at a rough overall software development process.
This process, shown in Figure , is not specific to any methodology. It is logically impossible to present scenarios that cover every situation. Because of that, much of what follows is generalized, but some strong opinions are offered about what should be done regardless of the methodology used.
What is presented here may not apply to your particular situation. There are many situations in which conventional wisdom must be discarded. What should be obvious is that the basic steps you need to take are the same regardless of whether you work in an agile or waterfall methodology. Someone needs to gather requirements; someone needs to write Test Cases; and someone needs to execute Test Cases. For example, using Test-Driven Development is not enough to ensure the application meets the needs of the user, so even with TDD, functional testing still needs to be performed.
However, the way in which it is performed and the emphasis placed on functional testing can vary widely. So pick and choose those practices that make sense for your organization. Figure presents a basic development process in which the testers come into play—and roughly when they come into play in an ideal model. The three phases of the development lifecycle where testers work are initial design and construction, testing, and maintenance. In an agile methodology, the analysis, design, construction, and testing can be tightly compressed and not visible as distinct phases.
This is an important consideration in determining what works best for you. In Figure , testing is not presented as a distinct phase because it should occur hand-in-hand with development. During initial design, the Test Plans created for the analysis and design phase look radically different than they do once the testing team can actually perform tests. Tests in these phases are created to validate the analysis and design of the application. Tests turn a subjective requirement into an objective understanding of what the customer wants. This is a common practice. A more rigorous alternative is to specify requirements in a formal modeling language such as Z; specifications written in a formal modeling language follow strict mathematical theory that does not, in general, permit ambiguity. You can find more information on Z at http:
However, reading Z or other formal languages can be difficult. A well-constructed Test Case may not meet the rigor of a formal modeling language, but it can provide roughly the same benefits in an easy-to-read form and in much less time. A good Test Case is one with little (or, ideally, no) ambiguity that provides the same result for every run. The goal of Test Cases in the initial design phase is simple: objectify, and thereby validate, the requirements. The following is a relatively simple, often-used example. Take a requirement that states the following: Visitors should comment on a blog post.
This is a straightforward requirement—or is it? Remember that you are now looking at this requirement from the perspective of testability. For a requirement to be testable, it cannot be ambiguous because if it is ambiguous, it is not repeatable. Before examining the details, look at Table , which is a use case that documents this requirement in more detail. It is acceptable to get a requirements statement like the one just given. These are supposed to be high-level statements that provide a container for users to narrow down their requirements.
The details need to be unambiguous. This use case raises a number of questions. First, what is the order of precedence when pulling cookie information or profile information? In other words, what if one user has logged onto the system before and made a comment, thereby having the cookie set, and another user who has never made a comment is now using the same machine?
Does the system clear the information? Does it use the cookie information? What about when a user logs onto the blog engine from the same machine after a non-logged-on user has made a comment? Which information do you use? These questions seem minor, and this is a small example, but left unanswered they can cause bugs. Unanswered questions also make it difficult for developers to say they got it right. Testers have to ask these questions to create good Test Cases.
Other ambiguous items show up here as well—what information is needed to create a comment? Do I just need the comment, or do I need to provide an e-mail address? What information is actually in the user profile, and just because it is there, do I use it to fill in whatever fields are available? These questions are more important because there is a data model issue here.
These fields must be saved someplace, so you must know something about them; otherwise, you may end up having to rewrite the data access code to pull data from a different place. This simple Test Case follows the normal path, but it does leave room for ambiguity: What blog engine website? Which post should they click?
What information displays in addition to the comment? However, during the analysis phase you may not have anything concrete to latch onto, or need that level of information. The important piece here is that the user now knows exactly what to expect. This is good enough for the analysis phase.
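To tie this back to the SpecFlow series mentioned earlier, the normal path of the blog-comment Test Case could also be captured as an executable, unambiguous scenario. The sketch below is illustrative only: the Gherkin wording, the step bindings, and the BlogPage stand-in are assumptions, not part of the original Test Case.

```csharp
using System.Collections.Generic;
using TechTalk.SpecFlow;
using Xunit;

// Minimal in-memory stand-in for the blog engine page; illustrative only.
public class BlogPage
{
    private readonly List<string> _comments = new List<string>();
    public void Open(string postTitle) { /* navigate to the post in a real test */ }
    public void SubmitComment(string text, string email) { _comments.Add(text); }
    public IEnumerable<string> VisibleComments() { return _comments; }
}

[Binding]
public class CommentOnPostSteps
{
    // Scenario under test (Gherkin):
    //   Given I am on the post "Welcome"
    //   When I submit the comment "Nice post" with the e-mail "visitor@example.com"
    //   Then the comment "Nice post" is displayed below the post
    private readonly BlogPage _page = new BlogPage();

    [Given(@"I am on the post ""(.*)""")]
    public void GivenIAmOnThePost(string title)
    {
        _page.Open(title);
    }

    [When(@"I submit the comment ""(.*)"" with the e-mail ""(.*)""")]
    public void WhenISubmitTheComment(string text, string email)
    {
        _page.SubmitComment(text, email);
    }

    [Then(@"the comment ""(.*)"" is displayed below the post")]
    public void ThenTheCommentIsDisplayed(string text)
    {
        Assert.Contains(text, _page.VisibleComments());
    }
}
```

Note how each step pins down a concrete value (the post, the comment text, the e-mail address), removing exactly the kinds of ambiguity raised by the questions above.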
Whether you mark these early Test Cases as Ready or leave them In Design is mostly a choice of how you want to report on them during the analysis and design phase. However, you should probably opt to leave the Test Cases in the In Design state, because you will almost always have to do minor updates after the functionality is built and ready for testing. This may include adding or removing steps and putting in concrete controls, such as Select Your Nationality From the Drop Down List, as opposed to the preceding scenario in which the Test Case merely specified that places were provided for you to enter your nationality; now the control type is known.
In general, a Test Case that is Ready is in a final form that can be executed. Because of how flexible the work item system is, it is easy to add additional states, which is another option available to you. In general, adding additional states will not break the reports, but the reports need to be updated to see the new states.
However, this does bring up another point: Test Cases and iterations. Iteration 1 is the analysis iteration, and as such no testing will be done in this iteration, but Test Cases will be written. It is perfectly acceptable to mark Test Cases in Iteration 1 as Ready when they are completed by the standards of Iteration 1. Then, when you begin Iteration 2, which is the start of the construction iterations, you may want to duplicate the Test Cases and reclassify them into Iteration 2.
This also enables granular tracking of Test Cases: you can say that a Test Case was Ready in one iteration but not ready in another. Again, how you do this is up to you and how you want to report on it; a sketch of one way to query this per-iteration status follows.
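As a hedged sketch of that kind of per-iteration reporting, the following fragment queries Test Case work items by iteration through the TFS work item tracking client API; the collection URL, project name, and iteration path are placeholder assumptions.

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class TestCasesByIteration
{
    static void Main()
    {
        // Placeholder collection URL; substitute your own TFS instance.
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection"));
        var store = tfs.GetService<WorkItemStore>();

        // WIQL: all Test Cases classified under Iteration 2, with their state.
        const string wiql =
            @"SELECT [System.Id], [System.Title], [System.State]
              FROM WorkItems
              WHERE [System.WorkItemType] = 'Test Case'
                AND [System.IterationPath] UNDER 'MyProject\Iteration 2'";

        foreach (WorkItem wi in store.Query(wiql))
            Console.WriteLine("{0}: {1} ({2})", wi.Id, wi.Title, wi.State);
    }
}
```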
The goal of Test Cases in construction is straightforward: they should be repeatable, they should find bugs before the user does, and they should test the functionality of the application. The first and last of these are open for discussion. Exploratory testing is not necessarily repeatable unless you record it. The test may not be repeatable because of back-end data or processes, but at least a tester or developer can duplicate the steps taken to find a bug if one is found. The last item can be a bit of a problem: anyone who has ever done testing can tell you that testing all the functionality is not possible, unless that is your quality bar, which usually occurs only in life-safety applications.
What do you test? It goes back to the second point: you should first run the tests that cover the functionality the user is most likely to exercise, because that is the most likely place to find bugs. All the other code is tested if time is available. Will there be exceptions to this? Of course, but using this guideline can help catch the majority of the bugs before the users catch them.
As an industry, there tends to be a lack of agreements (Service Level Agreements [SLAs] or other contracts) relating to the acceptance of software by the customer. This makes things difficult for the development team: if the customer rejects the software, who pays the cost for it? Ideally, the conditions under which the customers will accept or reject the software are documented in a contract. The best basis for this is that an agreed-upon set of Test Cases executes correctly. If this were the case, the customers would be saying that these Test Cases adequately demonstrate the features of the system that you are supposed to deliver.
If these Test Cases pass, the system does what the customers asked you to do, and they can validate that you have delivered that functionality to them. Now this does a couple of things: the customers have to sign off on the Test Cases, and changes to the requirements cause changes to the Test Cases that, in turn, require customer sign-off. The last benefit is that user acceptance testing is well defined.
Sure, the users can do exploratory testing (that is, playing with the system to see if it works). But the real meat is the execution of the Test Cases, and this makes acceptance easy, because by this point the Test Cases should all have been executed at least twice. So the Test Cases you create now are of benefit when delivering the software as well. One potential benefit of MTM being separate from Visual Studio is that users performing UAT can have MTM installed and run their exploratory testing through the Test Runner.
In this way, if the user does find a bug, the development team has a complete record of the steps the user took to arrive at the bug. This does require the end user to have a license for the software. Are SLAs going to be used? After all this, it is sad to say that the answer is probably no, because there will almost always be some last-minute items the customers want that can cause problems somewhere.
Keep a process, but be aware of the customer's needs. Finding a way to fit both the process and the customer needs together can give you the power to use what has been discussed here. This section covers some common scenarios and how you can handle them from a planning and tracking perspective. Before everyone on the team rushes to write features and write Test Cases, you need a plan for how to manage and track this work. Notice that the Test Case work item does not include a field for tracking time, and there is a reason for this: what would that time track? Is it tracking the creation of the Test Case or the execution of the Test Case?
It would be hard to say. Another item to consider is projects in which the project manager uses Microsoft Project to track work. One way to handle both concerns is to create separate Task work items, linked to the Test Case, for creating and for executing it. This structure solves a number of problems. First, a project manager can assign the task of creating a Test Case to the test team, which means that the activity can be captured in a Microsoft Project work breakdown structure (WBS).
Second, the project manager has the option to schedule the Test Case for creation and for execution separately. When doing it this way, the Assigned To field would be the person creating it in the first case and executing it in the second case. You do not need to use the Assign To Tester functionality unless testing on multiple configurations. This enables the project manager to track the time discretely for each activity; however, you may not want to assign a task to execute a Test Case.
Time spent executing tests is quite difficult for a tester to realistically keep track of.
The task would be associated with the Test Case and not the test run, which makes reporting even more difficult. Linking the task does provide some additional structure and enables the Test Cases to show up in a tree query as opposed to a directed-links query, but it does not feed any reports. In feature-driven development (FDD), software development is done on multiple branches. That is, you may have a branching structure like the one shown in Figure . In this type of branching structure, it is generally considered a best practice to perform comprehensive testing on all code in each feature branch before merging it to the main development environment.
How do you keep track of it? The recommended solution is to create one Test Plan per feature branch. Because you can copy suites between Test Plans, this becomes relatively simple. Figure shows the Copy Suites screen. You can either copy the entire suite (which includes the root node) or copy individual suites. It is critical to note that this does not create a copy of the Test Case.
It simply references the existing Test Cases, which in this situation is exactly what you want—change a Test Case in one place and it changes in all places. In this way, multiple Test Plans can be associated with code from different branches, because each Test Plan can be associated with its own build, but the results can all be reported on together.
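A sketch of the one-plan-per-branch idea through the test management client API follows; the collection URL, project name, plan name, and iteration path are placeholder assumptions rather than prescribed values.

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class PlanPerBranch
{
    static void Main()
    {
        // Placeholder collection URL and project; substitute your own.
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection"));
        var tms = tfs.GetService<ITestManagementService>();
        ITestManagementTeamProject project = tms.GetTeamProject("MyProject");

        // One Test Plan per feature branch, named after the branch.
        ITestPlan plan = project.TestPlans.Create();
        plan.Name = "Feature-Comments";            // hypothetical branch name
        plan.Iteration = @"MyProject\Iteration 2"; // classify for reporting
        plan.Save();

        Console.WriteLine("Created plan {0} ({1})", plan.Name, plan.Id);
    }
}
```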
When you move from iteration to iteration, you need to deal with a number of issues. Some of these involve uncompleted Test Cases; in others, the Test Cases were completed but never executed. How you handle them depends on how you want to report on them. If you have a Test Case with the iteration set to Iteration 1, but then you copy the suite that it is part of to another Test Plan that is testing Iteration 2, you have a problem.
This can significantly skew your reporting, depending on how you report on it. What are your options? You can copy the suite and then update the copied Test Cases, or you can create true copies of the Test Cases. In the first case, the suite copy is an expedient way to handle the problem, but the recommendation is to go one step farther: after you perform a suite copy, update all the Test Cases that were copied to the same iteration that the new plan is in. To make this clearer, consider the following: you have a plan named Analysis that is set for Iteration 1.
All Test Cases in the plan are also set for Iteration 1. The analysis phase is complete, and you move to the next phase in which these Test Cases will be updated. If you plan to do work on these Test Cases, use the suite copy to add them to a new Test Plan called Construction.
After they are copied over, update all the Test Cases so that the iteration is set to Iteration 2 to match the iteration in which they will be worked on. Then continue to work on them as you normally would.
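The bulk reclassification can be scripted as well. This hedged sketch uses the work item tracking client API; the collection URL and paths are placeholders, and the query is assumed to match exactly the Test Cases that were just copied.

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class ReclassifyCopiedTestCases
{
    static void Main()
    {
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection"));
        var store = tfs.GetService<WorkItemStore>();

        // Assumed query: the Test Cases that were just copied into the
        // Construction plan but still carry the old iteration path.
        const string wiql =
            @"SELECT [System.Id] FROM WorkItems
              WHERE [System.WorkItemType] = 'Test Case'
                AND [System.IterationPath] = 'MyProject\Iteration 1'";

        foreach (WorkItem wi in store.Query(wiql))
        {
            wi.Open();                                   // make the item editable
            wi.IterationPath = @"MyProject\Iteration 2"; // match the new plan
            wi.Save();
        }
    }
}
```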
The second option is in many ways more appealing. Creating copies of the Test Cases allows you to preserve each Test Case as it was executed against the code in a given iteration. For example, suppose Iteration 3 ended in a release to the customer, and the team begins work on Iteration 4, which will modify some of the features from Iteration 3. This is an everyday occurrence in agile development, but less so in waterfall.
However, between the current release and the next release, those Test Cases may need to be re-executed against production code. If you are actively changing those Test Cases, you need to go back into the Test Case work item history to get back to the version executed against the current release. Copying, by contrast, acts almost as a branching mechanism for your Test Cases and enables you to preserve the Test Cases executed against a release, which may be handy for auditing purposes. Just be aware of what can happen in the various scenarios, and think it through before developing your plan.
As previously mentioned you can use configurations as metadata for reporting purposes and to cut down on the number of Test Cases that you need to maintain. But does it always make sense to do this? The answer is no.