Archive for July, 2009

Send emails without an SMTP server

July 25, 2009

In this internet era I doubt there is an application without the requirement of sending mails. To implement and test this common requirement, developers need access to an SMTP server. In some secured environments, such as banks, access to the SMTP server may not be possible at all. Otherwise we have to apply for SMTP access, and it is granted only for a very limited period of time; after that period the permission is revoked and we have to reapply.

This is a problem in testing. When a developer is unit testing a module that sends mails, an SMTP access failure may lead to a wrong conclusion, or he first has to verify that SMTP access is working at all. This is really a pain.

I didn't know that an elegant solution exists for this common problem. I saw the solution here.

<system.net>
  <mailSettings>
    <smtp deliveryMethod="SpecifiedPickupDirectory">
      <specifiedPickupDirectory pickupDirectoryLocation="c:\newemail" />
    </smtp>
  </mailSettings>
</system.net>

The above change in web.config makes the framework write each mail as a file to the given directory (c:\newemail) instead of sending it, provided the directory has write access. Also note that this directory must already exist.

C# Code:

using System.Net.Mail;

// Build the message (the address placeholders are from the original post)
MailMessage mail = new MailMessage();
mail.To.Add(new MailAddress("<somemailaddress>"));
mail.From = new MailAddress("<somemailaddress>");
mail.Subject = "Some Subject";
mail.Body = "Test mail body";

// SmtpClient reads deliveryMethod and pickupDirectoryLocation from web.config,
// so Send() drops the mail into the pickup directory instead of contacting a server
SmtpClient c = new SmtpClient();
c.Host = "localhost";
c.Send(mail);
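
If you prefer not to touch web.config (say, inside a quick unit test), the same behaviour can be set on the SmtpClient directly. A minimal sketch, reusing the mail message built above and the same example directory:

// Equivalent of the web.config settings, applied in code instead.
// The directory path is only an example and must already exist.
SmtpClient client = new SmtpClient();
client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
client.PickupDirectoryLocation = @"c:\newemail";
client.Send(mail);

Either way the mails end up in the pickup directory as .eml files, which you can open in a mail client to verify the content.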

Hope this helps!


Loopback check

July 24, 2009

This was a big problem for us and we didn’t notice this behavior for a long time.

The problem is as follows:
You receive error 401.1 when you browse a Web site that uses Integrated Authentication and is hosted on IIS 5.1 or IIS 6.

We use a third-party component to generate PDFs. This component runs within the web application and has to access the same application (loopback) to fetch the aspx pages, read their content and generate the PDF. This worked on the development machine because the developer used "localhost" to access his server. It worked fine in our development environment too, because we had deployed the application as a virtual directory under the default website. So when the developer used http://localhost/ there were no errors. Everything was fine.

But things are different in the integration and production environments. Those environments are maintained by another team in a different country, and the setup is different there. In integration the application is deployed as a website and there are other sites hosted on the same server. Every site uses the default port 80 and requests are resolved using custom host headers. Therefore http://localhost/ would not work.

After understanding this requirement, we changed localhost to the FQDN (in our case the custom host header), hoping it would work. When we tested this in our development environment we started having problems: authentication failed. We checked here and there with no clues. Then, by accident, we accessed the server from a different machine using the FQDN and there was no problem. Only then did it occur to us that the FQDN fails only when the site is accessed from the same machine. Bingo! After that a little bit of Googling made life easier.

To put it correctly: when you run Windows Server 2003, use Windows Integrated Authentication and use an FQDN (or custom host header) to access the local website in IIS 6.0, you will get this problem. In other words, you will only receive this error when you access the site directly from the local server. This might be a rare scenario, but it will happen in cases like the one described above.
1. You are asked for a username and password when you try to access the website using the FQDN.
2. No matter how many times you enter the username and password, authentication fails.

This behavior is the result of a change introduced with Windows Server 2003 Service Pack 1:
"This issue occurs if you install Microsoft Windows XP Service Pack 2 (SP2) or Microsoft Windows Server 2003 Service Pack 1 (SP1). Windows XP SP2 and Windows Server 2003 SP1 include a loopback check security feature that is designed to help prevent reflection attacks on your computer. Therefore, authentication fails if the FQDN or the custom host header that you use does not match the local computer name." (via)

There is a workaround for this problem. Fortunately Microsoft provides a way to disable the loopback check, as given in the above link. We always follow Method 2. Note that Method 2 requires the server to be restarted.

Method 2: Disable the loopback check
Follow these steps:
1. Click Start, click Run, type regedit, and then click OK.
2. In Registry Editor, locate and then click the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
3. Right-click Lsa, point to New, and then click DWORD Value.
4. Type DisableLoopbackCheck, and then press ENTER.
5. Right-click DisableLoopbackCheck, and then click Modify.
6. In the Value data box, type 1, and then click OK.
7. Quit Registry Editor, and then restart your computer.
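
For those who prefer the command line, the same registry value can be created with a single reg add command from an administrative command prompt (the restart is still needed). This is simply a shortcut for the steps above:

reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f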

We learnt a very important lesson after facing this problem: it is important to have the same kind of environment setup in all stages. If only we had had the same environment in all stages, we could have spotted the problem earlier, during development itself, and all these headaches during deployment could have been avoided.

Hope this helps.

A Cautionary Tale: How projects fail!

July 4, 2009

The following story is taken from Acceptance Test Engineering – Volume 1, Beta2 Release 29

Following is the story of a project where the person in charge of defining and accepting a product fails to rise to the challenge. It describes what goes wrong and why, and then provides an alternate outcome showing how the project could have worked out if proper acceptance practices had been used.

Bob Kelly is a mid-level manager in the marketing department at Acme Products. He is in charge of the Mid Sized Markets product group and is the product manager of its core product, the XCelRator. He’s just finished a bunch of market research and has come up with a great idea for a new module for the product that he believes will double the revenue of the product. The Product Development division has a team that is just winding down its current project and Bob is keen to get the team started on building the new module. He calls a meeting and runs through the PowerPoint slide deck he’s been using when talking with focus groups and potential clients. He concludes the presentation by laying out the key delivery dates which must be met to allow the company to showcase the product at the annual trade show. He asks the project manager to draw up the project plan and get the team working on the new module.

Dev Manager meets with his team to collect some information and then he defines the project plan. Based on the delivery date he defines intermediate milestones for Requirements Complete, Design Complete, Code Complete and Test Complete.

Team starts writing requirements documents. At Req’ts Complete, dev manager declares requirements complete on time. The dev team knows full well that some areas of the requirements are still too vague to implement. They will have to get clarification on the details as they do the design. Hopefully the product manager will be more available to answer questions about the requirements as his lack of availability is at least partially to blame for the requirements still being vague.

At Design Complete, ditto. Dev team knows it hasn’t designed some parts of the functionality except at a very cursory level. They will have to fill in the details as they code. Meanwhile, the product manager thinks everything is proceeding according to plan and that he’ll have a great product to demo at the trade show. He starts thinking about how he’ll spend the bonus he’s sure to get for completing this project on time. He calls his architect and asks her to start drawing up plans for the extension to his house, complete with indoor/outdoor swimming pool.

At code complete the team requests an extra month to finish coding parts of the functionality they haven’t had time to do yet. The Test Manager asks what parts of the software are complete so that testers can start testing and the team responds “None, we are each working on a different part of the system and it will all be finished at about the same time.”

To ensure the product is ready for the trade show, the product manager asks the test manager to make up the schedule by compressing the duration of the test phase. He reduces the number of planned test cycles to reduce the elapsed time based on assurances that the code will be in great shape due to the extra time the dev team is being given. The product manager still believes the product can be ready for the trade show but nonetheless he asks the architect to scale back the extension to his house by removing the enclosure for the pool. “I’ll still be able to use it most of the year and I can always add the enclosure with my release 2 bonus.”

As the new Code Complete deadline approaches, the team asks for another month. Product owner reluctantly gives them 2 weeks. “Two weeks later will make it tight for the trade show but we can use the beta so it won’t affect my bonus too badly.” he hopes.

The dev team finally delivers the code two months late. The test team starts testing but finds significant problems that prevent completion of the first pass of the test cases. Testing halts after less than a week. Dev team takes a month to fix all the major show-stopper bugs before handing the code back to test. Test team makes it through the full test cycle this time but finds several hundred defects. Development starts fixing the bugs as they are found.

Test team finally accepts a new build with the Sev 1 and 2 bug fixes. Almost immediately they find several new Sev 1 regression bugs. Development disagrees that some of the bugs are even valid requirements. “The requirements never said anything about …” and “Why would anyone ever do that?” they exclaim. The product manager has to be brought in to settle the dispute. He agrees that some of the bugs aren’t real requirements but most are valid scenarios that he never considered but which need to be supported. Test tells the PO that there is no way in … that the original schedule can be met.

After 4 test&fix cycles that took 50% longer than the original schedule (let alone “making up the schedule”), most of the Sev 1 and 2 bugs appear to have been fixed. There are still several hundred Sev 3 and 4 bugs, and Test has stopped bothering to log the Sev 5’s (poorly worded messages, field labels or button names, etc.). Test says the product is not ready to ship and will require at least 2 more test&fix cycles before they will agree to release it.

The product is now several months late and won’t be ready for the trade show even in alpha release. “We can still show the box and do some very controlled demos,” the product manager assures his boss. Bob is seeing his bonus diminish with each week of reduced sales in this fiscal year. He decides to ship the product, overruling the Test department. He revises the sales forecasts based on the late launch caused by the “underperformance of the development and test teams.”

The manager of the operations department hears about this too late and comes storming into Bob’s office. “I hear you are planning to release the new module with 300 known Severity 3 defects. Do you have any idea what this will do to our support costs?” A counterproductive argument ensues because there is no turning back at this point; the announcements have been made and to change plans now would be hugely embarrassing to the company. “I sure hope this works out OK.” thinks Bob to himself; at this point he doesn’t feel like he’s in charge of his own destiny.

Bob’s marketing machine has been in high gear for quite a while and has generated quite a bit of pent up demand, especially since some users were hoping to be using the product months ago. Users try using the new product in droves. They run into all manner of problems and call the Help line which is overwhelmed by the call volumes. Many users get busy signals; the lucky ones wait listening to recorded announcements for long periods of time. The help desk has to bring on extra staff who need to be trained very hastily and therefore cannot provide very good service to the customers. Some customers give up trying to use the product because of long waits or poor support.

Many of the user problems are related to usability issues that were not detected during the rushed testing because it was so focused on the Sev 1 & 2 bugs (and the usability stuff was usually rated 3 or below, many of which weren’t even logged due to the focus on the 1’s and 2’s). At peak usage times the system slows to a crawl; the development team is called in to figure out why it is so slow, distracting them from working on some of the improvements identified during testing. Users have trouble importing data from prior versions of the software or from competitors’ products.

A large percentage of the users abandon the product after the free trial is over; the conversion rate is less than half of the projected rate. Revenues are running at 40% of the revised projections and less than 20% of the original projections. Support costs are 50% over original projections due to the hiring binge in the user support centre.

The capital cost is 35% over the original budget and has eaten into the planned budget for a second module with additional must-have functionality. Instead, the 2nd release has to focus on improving the quality. The new module will have to wait until 2nd half of next year. The product manager revises the revenue projections yet again and calls the contractor to cancel the addition to his house. Shortly after sending in his monthly status report his phone rings. It is the VP, his boss, “requesting” he come to his office immediately…

What Went Wrong
1. Overcommitted functionality at product manager’s insistence. (Sometimes the dev team will over commit out of optimism but in this case the product manager did it to them.)
2. Waterfall process is inherently opaque from a progress perspective. The first milestone that isn’t easy to fudge is Code Complete. The first realistic assessment of progress is well into the test cycle.
3. Product manager wasn’t available to clarify vague and missing requirements. Testers were not involved soon enough to identify the test scenarios that would have highlighted the missing requirements. But no one could prove the requirements were incomplete so RC was declared on time.
4. Dev team couldn’t prove design wasn’t done (because it is a matter of opinion as to how detailed the design needs to be) so Design Complete was declared on time.

5. Dev team cut corners to make the new (late) Code Complete deadline. The code was written but much of it wasn’t properly unit tested. They knew this and would have told anyone who asked but no one wanted to hear the answer.
6. The quality was awful when delivered to Test. So it had to be redone (we never have time to do it right but we always make time to do it over!)
7. Test was asked to recover the schedule (typical!) but testing took longer because the poor quality code required more test&fix cycles to get it into shape.
8. No clear definition of “done” so the decision is made on the fly when emotions get in the way of clear thinking. The product manager let his attachment to his commitments override any sensibility about quality.
9. The operations acceptance criteria were never solicited and by the time they were known it was too late to address them.
10. Waterfall process hid true progress (velocity) until it was too late to recover. There was no way to scale back functionality by the time it became undeniable that it would not all be done on time. There was no way to reduce scope to fit the timeline because the waterfall-style approach to the project plan (RC, DC, CC, TC milestones) caused all features to be at roughly the same stage of development. Therefore cutting any scope would result in a large waste of effort and very little savings of elapsed time.
11. Development was rushed in fixing the Sev 1 problems so that testing could get through at least one full test cycle. This caused them to make mistakes and introduce regression bugs. It took several test&fix cycles just to fix the original Sev 1&2’s and the resulting regression bugs. In the meantime the Sev 3’s piled up and up and up. This resulted in several more test&fix cycles to fix and even then more than half were still outstanding.
12. Lack of planning for the usage phase of the project resulted in a poor customer support experience which exacerbated the poor product quality.

How it Could Have Gone
The product manager comes to the dev team with requirements that are representative of the clients’ expectations.

The dev team estimates the requirements as being 50% over the team’s capability based on demonstrated development velocity. The product manager is not happy about this. The dev team proposes incremental development & acceptance testing: instead of 4 waterfall milestones they suggest 4 incremental (internal) releases of functionality where each increment can be tested properly.

The product manager selects first increment of functionality to develop. He defines the user model consisting of user personas and tasks. Dev team whips up some sketches or paper prototypes and helps product manager run some Wizard of Oz tests that reveal some usability issues. The product manager adjusts the requirements and dev team adjusts the UI design. The product manager works with dev team and the testers to define the acceptance tests. The dev team automates the tests so they can be run on demand. They also break down the features into user stories that each take just a few days to design and test.

Team designs software and writes code using test-driven development. All code is properly unit-tested as it is written. All previously defined automated tests are rerun several times a day to make sure no regression bugs are introduced as the new feature is implemented. As each feature is finished, as defined by the feature-level “done-done” checklist, the developer demos it to the product manager and tester, who can point out any obviously missing functionality that needs to be fixed before the software is considered “ready for acceptance testing”.

As part of incremental acceptance testing they do identify a few new usage scenarios that were not part of the requirements and provide these to the product manager as suggestions for potential inclusion in the subsequent increments. The product manager adjusts the content of the next increment by including a few of the more critical items and removing an equivalent amount of functionality. He also contacts the operations manager to validate some of the operational usage scenarios identified by the dev team and testers. The operations manager suggests a few additional operational user stories which the product manager adds to the feature backlog for the next increment.

At the end of first increment of functionality (which took 5 iterations to develop, one more than expected) dev team runs the para-functional tests to verify the software performs up to expectations even with 110% of the rated numbers of users. The first test results indicate it can only handle roughly 50% of the expected users and gets even slower as the database starts to accumulate records. They add some work items to the schedule for the second increment to address these issues and warn testing about the limitations to avoid their wasting time stumbling onto them. Testing finds only a few minor bugs during execution of the functional test scripts (the major stuff was all caught during incremental acceptance testing.) They move on to doing some exploratory testing using soap operas and scenarios as their test session charters. These tests identify several potential scenarios the product manager never thought of; he adds them to the feature backlog.

The product manager arranges to do some usability testing of the first increment with some friendly users based on some of the usage scenarios identified in the user model. The results of the testing identify several enhancements that would improve the user satisfaction. The product manager adds these to the things to do in the next increment of functionality.

The product manager calculates that demonstrated development velocity is 25% less than original estimates. Based on this he adjusts his expectations for the functionality to be delivered in the release by removing some of the less critical features and “thinning” some of the critical features by removing some nice-to-have glitz. “Better to deliver good quality on time than to try to cram in extra functionality and risk everything,” he thinks to himself.

The development team delivers the 2nd increment of functionality with similar results as the first. The work they did to improve performance results in the performance tests passing with flying colors, with acceptable response times at 120% of rated capacity and no degradation as the database fills up with transactional data. They add some tests for penetration testing and schedule a security review with the security department. The product manager makes some adjustments to the functionality planned for the 3rd increment of functionality. He includes some functionality to address operational requirements such as migrating data from earlier versions of the software and importing data from competitors’ products; he wants to make it real easy for users to adopt his product. He’s happy with how the project is progressing and confident that they will be able to deliver excellent quality and on schedule.

In the third increment the development team delivers 20% more functionality than originally planned. The product manager had to scramble to provide the acceptance tests for the extra features brought forward from the fourth increment. Based on this, the product manager is able to plan for some extra functionality in the fourth increment. He considers reviving some of the functionality that he’d cut after the first increment but decides it really wasn’t that useful; a much better use of the dev team’s efforts would be some of the usability enhancements suggested by the last round of usability testing. He also adds functionality to make it easy to upgrade to the next (yet unplanned) release without having to take down the server. That will help reduce the operational costs of the software. “Yes, this is going to be a great product!” he says to himself.

As the development team is working on Increment 4, the product manager discusses the Acceptance Test Phase of the project. “We had originally planned 3 full test&fix cycles each of 2 weeks duration with a week for fixes in between for a total of 8 weeks of testing.” recounts the Test Manager. “But based on the results of testing Increments 1, 2 and 3 I’m predicting that we’ll only need one full test cycle of 2 weeks plus a 1 week mini-cycle of regression testing for any fixes that need to be done (and I’m not expecting many.) The automated regression testing the dev team is doing as part of readiness assessment has been preventing the introduction of many regression bugs and the automated story tests we’ve been co-authoring with you and the dev team has prevented misunderstandings of the requirements. This is the most confident I’ve ever felt about a product at this point in the project!”

***