Over the last few weeks, I've been less involved than I should have been. I was focused on finishing other things, but now things are getting back in order. In this document, I want to lay out my vision for our project's quality assurance. I've spoken with most of you about this already, but I wanted to write it down. I'm not dictating anything; feel free to contribute.
- QA presentation
- We need more concrete details about our QA and actually start doing some
- TA audit
- He will be auditing our process so we've got to formalize a few things.
- Daniel's evaluation
- Of course, we need to make him happy
How do we verify we have attained these goals?
The third goal will be attained through a good development process. Simply doing what we are doing for our project would be a hell of a lot better than what Daniel had before. However, we'll do even better with:
- Code documentation (to be covered in Maintainability)
- Developer Guides discussing extensions.
- How do we evaluate this? Daniel can tell us if we're missing stuff he would like.
Usability is hard to evaluate. It's a subjective matter. However, concrete techniques exist to test usability. Our goal is not to make it usable by the general public; we want developers who know a little about UCMs to be able to use the tool easily.
- Grab random students, give them a 5 min intro to UCMs, ask them to replicate a diagram.
Other than that, I feel usability is a lot of "we know what we have to do, give us the time, and pray we have given it enough attention by the end". How can we solve this problem? Here's an idea:
- Each week, UI guy (JP) tests the tool to see what is working, what isn't. JP creates bugs in BugZilla corresponding to the things he wants fixed.
- Should he be the one fixing them? Should the author fix them?
- Some Use Cases have been written down, but there are many more that should be discussed. Then developers have something to follow when developing and can stay away from the "oh yeah, but this is just some prototype coding".
- It's going to take about as long to do it right the first time, so I'd like the interactions to be detailed thoroughly.
- The structure imposed on us makes it so that the typical, easy file-open use cases are obviously fulfilled. I'd like to see more focus on diagram editing, which is our main task.
Maintainability is also subjective. Man, why didn't we choose easier objectives?
- Coding Standards.
- I want all our code to be autoformatted in Eclipse using the standard code format.
- I want someone (maybe it will be me) to create Eclipse templates for our code. We've been talking about it for two months.
- JavaDoc. We should use it a lot. I, for one, am someone who has trouble with documentation. It slows down productivity if applied blindly. For the moment, I don't think I will be asking to document ALL methods. If I did that, we'd spend 75% of our time saying "getXYZ returns an XYZ object". Furthermore, JavaDoc like that is pretty useless. I hate writing comments every three lines of code when it is readable without the comments. I won't require you to say that your next line (blah.setXYZ(xyz);) sets XYZ when it is blatantly obvious. Instead of putting effort into writing comments on code that is easy to read, I would like us to focus on keeping our architecture clean.
- What I would like to see is:
- For each class, javadoc its purpose and possibly how one should interact with it. We'll have tons of classes, and having a standard header with this info will help a lot.
- Each method with particular preconditions should be documented.
- When a method is too long, split out its flow into multiple functions.
- When methods do the same sort of operation, factor out the repetitive code.
- Spend time refactoring, not documenting long methods.
- Methods shouldn't do too much. Look at the code in the framework: a method or class has a limited number of responsibilities. If we start writing 200-line methods, our code will obviously look out of place. Jordan will be happy; he tries to keep his classes under 100 lines.
- Generally, apply good principles that we've learned in the past. Reserve some time each week to get this done. I might even assign bugs when I don't like the code.
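To make the standard header and precondition ideas concrete, here's a sketch of the kind of documentation I have in mind; BindingRegistry and its methods are made up for illustration, not part of our code:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Purpose: tracks which diagram component is bound to (contained in) which parent.
 *
 * Interaction: callers must go through bind()/unbind() and must never track
 * parents themselves, so the binding rules stay in one place.
 * (This class is a made-up illustration, not part of our code base.)
 */
class BindingRegistry {
    private final Map<String, String> parentOf = new HashMap<>();

    /**
     * Binds child inside parent.
     * Precondition: child is not already bound; throws IllegalStateException otherwise.
     */
    public void bind(String child, String parent) {
        if (parentOf.containsKey(child))
            throw new IllegalStateException(child + " is already bound");
        parentOf.put(child, parent);
    }

    /** Unbinds child; does nothing when it was not bound. */
    public void unbind(String child) {
        parentOf.remove(child);
    }

    /** Returns the parent of child, or null when unbound. */
    public String parentOf(String child) {
        return parentOf.get(child);
    }
}
```

Note that the header says what the class is for and how to use it, and only the method with a real precondition gets detailed javadoc; the trivial ones get one line.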
- I think we might want to start using Reflection for some functions in our framework. I've seen functions in ET's code that do a select case on the class type and then instantiate a new object of that type. Using reflection there would make the code cleaner and our system easier to extend. Beware though: if we are using reflection or dynamic linking and we can't find what we are looking for, we should NEVER hide the exception. Hiding it makes it a bitch to debug.
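To illustrate, here's a rough sketch of the kind of reflection-based factory I mean; NodeFactory and its registration scheme are made up for illustration, not taken from our code:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical factory replacing a select case on the class type with reflection. */
final class NodeFactory {
    private final Map<String, Class<?>> registry = new HashMap<>();

    /** Registers the class that implements a given node type name. */
    public void register(String typeName, Class<?> cls) {
        registry.put(typeName, cls);
    }

    /** Creates a node of the given type. Reflection failures are never swallowed. */
    public Object create(String typeName) {
        Class<?> cls = registry.get(typeName);
        if (cls == null) {
            throw new IllegalArgumentException("Unknown node type: " + typeName);
        }
        try {
            return cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // Never hide the failure: wrap and rethrow so the bug surfaces immediately.
            throw new IllegalStateException("Cannot instantiate " + typeName, e);
        }
    }
}
```

The whole point is the catch block: the failure is wrapped and rethrown with the type name attached, so a missing or broken class shows up at the call site instead of surfacing later as a mysterious null.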
- Code reviews
- I think ET should review all your code to make sure it is well architected.
- I'll be reviewing everything as well.
- We need to do this periodically. Why not integrate it into our development process? I assign something to you, you do it and mark it as fixed, then ET or I review it and mark it as closed.
- Metrics will help us justify our maintainability.
- We'll need to query BugZilla's MySQL DB to see stats like these (and graph them over time):
- Mean time to fix.
- Max time to fix.
- Current open bugs.
- Total bugs fixed.
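As a starting point, here's how the first two stats could be computed once we've pulled timestamps out of BugZilla. The query in the comment is a guess at the schema (column names to be verified against our install), and the class itself is just an illustration:

```java
import java.util.List;

/**
 * Computes bug-fix metrics from (openedMillis, closedMillis) pairs, e.g. pulled
 * from BugZilla with a query along these lines (column names to be verified):
 *   SELECT creation_ts, delta_ts FROM bugs WHERE bug_status = 'CLOSED';
 */
final class BugMetrics {
    private static final double DAY_MILLIS = 86_400_000d;

    /** Mean time to fix, in days; 0 when there are no closed bugs. */
    static double meanFixDays(List<long[]> bugs) {
        long total = 0;
        for (long[] b : bugs) total += b[1] - b[0];
        return bugs.isEmpty() ? 0 : total / (double) bugs.size() / DAY_MILLIS;
    }

    /** Max time to fix, in days. */
    static double maxFixDays(List<long[]> bugs) {
        long max = 0;
        for (long[] b : bugs) max = Math.max(max, b[1] - b[0]);
        return max / DAY_MILLIS;
    }
}
```

Once our BugZilla workflow is nailed down (see below), we just swap in whichever pair of status transitions we decide defines "time to fix".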
We also have general quality assurance issues to address when testing our code. I want to stay far away from listing keywords in our presentation. I want to discuss what we have to do and how we think it should be tested, differentiating between manual and automatic testing.
So what are we doing? We're building a GUI for a graphical notation.
For each requirement, we need to decide how we will say the requirement is fulfilled. Our requirements are general. I foresee that we will be creating a few scenarios in order to test all of them. For example, to say that we've fulfilled the component binding requirement, we might have to test:
- moving a component over another one doesn't bind it.
- moving a component into another one binds it.
- moving a component out of the other one unbinds it.
- right-click + choosing unbind in the contextual menu unbinds it.
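Once we have command objects, each of those checks could be scripted as a tiny scenario. Here's a toy stand-in model (all names and the containment rule are hypothetical) just to show the shape such tests would take:

```java
import java.util.ArrayList;
import java.util.List;

/** Tiny stand-in model to script the binding scenarios listed above. */
final class BindingModel {
    static final class Comp {
        int x, y, w, h;
        Comp parent; // null when unbound
        Comp(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
        boolean fullyContains(Comp c) {
            return c.x >= x && c.y >= y && c.x + c.w <= x + w && c.y + c.h <= y + h;
        }
    }

    private final List<Comp> comps = new ArrayList<>();

    Comp add(int x, int y, int w, int h) {
        Comp c = new Comp(x, y, w, h);
        comps.add(c);
        return c;
    }

    /** The "move" command: reposition the component, then rebind by containment. */
    void move(Comp c, int newX, int newY) {
        c.x = newX;
        c.y = newY;
        c.parent = null;
        for (Comp other : comps)
            if (other != c && other.fullyContains(c)) { c.parent = other; break; }
    }

    /** The contextual-menu "unbind" command. */
    void unbind(Comp c) {
        c.parent = null;
    }
}
```

In the real tool the commands would go through our actual Command classes, but the scenarios stay the same: overlap does not bind, full containment binds, moving out unbinds, and explicit unbind works.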
Each of these implies a "scenario" to be performed either by scripting the command objects that are executed on the model or by using tools that simulate user clicks. In either case, at the end of the process, we compare the model's serialization with a previously recorded version. General logic:
- if the output file doesn't exist, create it and say the test passed.
- if it exists, compare with it; if different, the test failed.
- if you modify the model, which changes the output files, simply delete the existing files and manually check that everything is okay in the new version. Automated tests can then continue.
I don't think this qualifies as unit testing because we are testing many levels of interaction, but we'll still be using JUnit to execute these scripted scenarios.
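The output-file logic above is straightforward to implement as a helper our JUnit scripts can call; a sketch (the class name is made up):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Implements the golden-file logic above: record on first run, compare afterwards. */
final class GoldenFile {
    /**
     * Returns true when the serialized model matches the recorded output.
     * When no recording exists yet, one is created and the test passes.
     */
    static boolean check(Path expected, String actualSerialization) {
        try {
            if (!Files.exists(expected)) {
                Files.writeString(expected, actualSerialization); // first run: record and pass
                return true;
            }
            return Files.readString(expected).equals(actualSerialization);
        } catch (IOException e) {
            throw new UncheckedIOException(e); // never swallow I/O problems
        }
    }
}
```

Deleting the recorded file after an intentional model change then re-records it on the next run, exactly as described above; the manual step is eyeballing the new file before committing it.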
What bugs do we expect?
From my experience at MedTech working on the admission system, I can say the majority of the bugs were not related to functions not doing what they were supposed to, but to added functionality that changed the context in which these functions execute. Working with the underlying business objects is easy and straightforward. So you write code that does such and such, then you realize that you shouldn't do this if such and such global conditions are true. When you code something elsewhere, you might need those global conditions as well, so you repeat the code that examines the context. Then, later on, you fix a bug in one of its occurrences but leave the other code untouched. I would say 80% of the bugs in the admission system occurred because of discrepancies in context management.
I think we need to centralize our behaviour somewhat. We have yet to define the exact behaviour of all elements, so it is hard to do now. But given my experience, and given that Daniel wants to be able (in the future) to add scripting capabilities, an additional layer above our model would probably be very helpful. We need to research this.
In any case, this layer could support unit testing and would help facilitate our test scenarios.
I wonder what it would be like to have a class to which we supply Commands and which either executes the command (placing it on the stack and all that) or simply drops it if it is not allowed in our context. This "validation" class could make heavy use of delegation to perform the decision. We could model our behavioural aspects in a separate model. There must be a design pattern for this sort of thing. Going down this road will either:
- be a great success and will simplify our lives and the future maintenance of the system
- be the most horrible decision we ever made
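To make the idea less abstract, here's a minimal sketch of such a validation class; Command, CommandGate and the pluggable context rule are all hypothetical names, not a design decision:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;

/** A command with an undo, as in the usual Command pattern. */
interface Command {
    void execute();
    void undo();
}

/**
 * Sketch of the "validation" class: every command goes through one gate that
 * either executes it (and keeps it for undo) or drops it when the current
 * context forbids it. The context rules are delegated to a predicate.
 */
final class CommandGate {
    private final Predicate<Command> allowed;
    private final Deque<Command> undoStack = new ArrayDeque<>();

    CommandGate(Predicate<Command> allowed) {
        this.allowed = allowed;
    }

    /** Returns true if the command was executed, false if dropped. */
    boolean submit(Command c) {
        if (!allowed.test(c)) return false; // illegal in current context: drop it
        c.execute();
        undoStack.push(c);
        return true;
    }

    /** Undoes the most recent command; false when there is nothing to undo. */
    boolean undo() {
        if (undoStack.isEmpty()) return false;
        undoStack.pop().undo();
        return true;
    }
}
```

Because all context checks funnel through one place, the repeated "examine the context" code from the MedTech story disappears, and the gate is also a natural hook for scripting later on.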
So what do we have to do now?
- For each functional requirement, figure out how to decide if the requirement has been fulfilled.
- Define tests to be done in the tool that encapsulate all the needed behaviour.
- These tests will need to have a predetermined model state before and after the test.
- We need to define the test data that will allow our automated tests.
- I think it would be cleaner to have these as scripted commands run as JUnit tests.
- We should aim at passing in high-level commands if we can, almost at click level, so that we exercise as much code as possible.
- Each test should verify its output.
- We should have tests to verify that illegal behaviour is not allowed in specific contexts.
- We need high level scenarios that encompass tons of tests.
- For example, using the tests above, we might be able to generate a full UCM from scratch.
- With time, we should aim at building different diagrams, not only the pizza one.
- We need to start creating bugs (enhancements) in BugZilla for the audit, once we have work to do.
- We have to verify how many requirements (and tests) currently run and currently pass for our presentation.
- We need to polish off our build generation so that it is ready for the demo (are we currently testing anything? what happens if the build fails?)
- We need to write down some aspects of our development process.
- Change management. Create the change request form and infrastructure.
- Our BugZilla workflow has to be defined in order for our future metrics to be valid. For example, how should we compute our mean time to fix? From assigned to closed? From new to closed? Should we have different calculations for bug fixes versus enhancements?
- Periodic code reviews?
- Periodic manual testing?
- We need to create our code templates.
- We need to put online our MilestoneThree presentation framework, using the DHTML tool found by Oli.
- We need to code review ET's code.
- Review the sample meta-model to be used.
Following feedback from certain team members, I won't start assigning individual tasks; as we can see, many of the items above aren't done, or are too general. The above lists what I think we need to do as a team, but I cannot create micro-objectives for these as they are too general. Here's what I see as a good division. Take what you wish; if you don't feel fit for a certain job, let's discuss it. To me, the following are more important in the short term than adding new features to the current project. Giving these areas a good shot of work will help us in the long run.
- JasonKealey: Process / workflow work. Review current code.
- JeanPhilippeDaigle: Automated builds/testing. Define UI interactions. Test data/test cases.
- EtienneTremblay: Behavioural modeling design patterns. Review meta-model, what works, what doesn't, in your current structure.
- OlivierCliftNoel: Write the MySQL queries needed for our BugZilla metrics. Put MilestoneThree online. I would really appreciate it if you were more involved.
- JordanMcManus: Coding templates. Review current code.
- We need to meet to discuss an efficient way of defining our test cases, test data and scenarios for all our requirements.
- Study the behavioural aspects of UCMs.
- Create bugs in BugZilla.
I'd like everyone to participate in defining how we can call our requirements complete, in an automated, testable fashion if possible. There is a lot of work to be done here and we really need to discuss this in person, as a group.
- 06 Mar 2005
We shouldn't have try-catches that consume errors deep in the code, leaving a variable null instead of something. You only detect there was a problem when another method invocation tries to use the null. We should really look into writing bulletproof code on our interfaces but not deep within our structure (design by contract). If something is null, something is wrong: show it. This will solve lots of headaches when new developers start working on the project.
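To illustrate the difference (both methods are made-up examples, not our code):

```java
/** Contrast: swallowing an exception deep in the code vs. failing fast. */
final class FailFast {
    /** Anti-pattern: the caller gets null, and a NullPointerException surfaces far away. */
    static String loadQuietly(String resource) {
        try {
            return load(resource);
        } catch (Exception e) {
            return null; // the real problem is hidden here
        }
    }

    /** Preferred: validate at the interface and let failures surface at the source. */
    static String loadOrFail(String resource) {
        if (resource == null)
            throw new IllegalArgumentException("resource is null");
        try {
            return load(resource);
        } catch (Exception e) {
            throw new IllegalStateException("Cannot load " + resource, e);
        }
    }

    /** Stand-in for some deep loading code that can fail. */
    private static String load(String resource) throws Exception {
        if (resource.isEmpty()) throw new Exception("not found");
        return "contents of " + resource;
    }
}
```

With loadOrFail, the stack trace points straight at the failing resource; with loadQuietly, a new developer spends an afternoon tracing a null back to its source.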