Software testing has matured into an independent and very important profession over time. As the software development process becomes more complex by the day, the demand to continuously evolve software testing practices and keep them aligned with the needs of software engineering grows as well.
One of the most challenging aspects of software testing is designing good test cases. Since test cases lay the foundation for effective test management, and later for sustenance engineering, a test case should be treated as a product in itself, and test professionals should take pride in the quality of their test cases because they are their creation.
The objective of this paper is to introduce and discuss why test cases should be treated as a product. The paper further introduces practices, evolved over time, that have consistently helped testers design good test cases, deliver them as a product, and meet the demands of the test projects they undertake.
Software Testing & Test Cases
There are many definitions of software testing, but the following definition comes closest to the standpoint of this paper:
“A technical investigation of the product under test conducted to provide stakeholders with quality-related information.”- Cem Kaner and James Bach
As Cem and James explain, a technical investigation is an organized and thorough search for information. The information objectives can be many: finding important defects and getting them fixed, helping managers make release decisions, checking interoperability with other products, assessing conformance to specifications, and so on. The information objectives depend upon the stakeholders' requirements.
As depicted in the figure below, various information objectives will require specific testing strategies and will yield different tests, different test documentation and different test results.
Figure 1: Representation of information objectives, test types and test cases
If we extend the thought of "information objectives (which could be many)" and "finding important defects (as a result of our thorough research)", and attempt to relate it to the test design activity, we may find Robert Binder's definition of software testing helpful. When I look at Cem and James's definition of testing, it shows me "what testing is"; when I look at Robert Binder's definition, it shows me "why we should test".
In his book, "Testing Object-Oriented Systems", Robert Binder describes software testing as an effective process for finding defects that could result from poor user interface design, elicitation errors, specification omissions, programming errors, configuration/integration errors, inefficient algorithms, inefficient programming, infeasible problems, and so on. These errors may induce defects of various types: side effects, unanticipated and undesirable feature interactions, inadequate performance, real-time deadline failures, synchronization problems, deadlock, livelock, and even system crashes.
Therefore, the test approach (or, as some may call it, the test strategy) essentially guides us in understanding what tests we should design to reveal the defects and the technical information.
Cem and James Bach teach in their BBST course that to get technical information from a piece of software, one must ask the program a question, and this question is nothing but a test case.
Some other definitions of a test case are:
“A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.” (IEEE)
“A test idea is a brief statement of something that should be tested. For example, if you're testing a square root function, one idea for a test would be ‘test a number less than zero’. The idea is to check if the code handles an error case.” (Marick)
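Marick's test idea can be turned into an executable question. This sketch assumes Python's `math.sqrt` as the square root function under test:

```python
import math

def handles_negative_input():
    """Marick's test idea: 'test a number less than zero'.
    Does the square root code handle the error case?"""
    try:
        math.sqrt(-1)
    except ValueError:
        return True    # Python's math.sqrt rejects negative input
    return False

print(handles_negative_input())
```

The one-line idea expands into a concrete test only at execution time; the brief statement is the durable part.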
Test Case – a Product and its Quality
The above discussion now takes us to the next point in our journey: understanding the test case as a product and the quality of this product. First, let us see what quality, in general, means to us.
Quality is defined as-
Fitness for use (Dr. Joseph M. Juran)
Conformance with requirements (Philip Crosby)
Quality is value to some person (Gerald Weinberg)
A detailed discussion of the above definitions is out of the scope of this paper, but they form a good base for defining software product quality.
I tend to say that software product quality is a multi-dimensional measure of how well a product meets its users' requirements at an affordable cost.
If we apply the above statement to the quality of a test case as a product, we could say that it is a multi-dimensional measure of how well a test case meets the requirements of its information objectives at an affordable effort and cost.
It is extremely difficult to draw a line through all the aspects discussed above and arrive at a central idea of the quality of a test case as a product. The practices that we have evolved at our organization address our day-to-day needs of meeting test project objectives and consistently satisfy our stakeholders. In the next section, however, I have attempted to share my thoughts on how we can engineer the test design activity such that the end product is of high quality.
Test Case Engineering
If we call a program a function of data and operations on that data, then we can treat a test case as a function of a carefully chosen set of data and the operations performed on it for a given information objective.
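That view of a test case can be sketched as a small structure; all names and the example data below are illustrative, not part of the TCE model itself:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCase:
    """A test case as a function of carefully chosen data and an
    operation on that data, tied to an information objective."""
    objective: str                    # what information this test seeks
    data: Any                         # carefully chosen input data
    operation: Callable[[Any], Any]   # the operation under test
    expected: Any                     # the expected result

    def run(self) -> bool:
        """Ask the question: does the operation on this data behave as expected?"""
        return self.operation(self.data) == self.expected

# Example: one question asked of a trivial program
tc = TestCase(
    objective="verify rounding at a boundary value",
    data=2.5,
    operation=round,                  # Python 3's round uses banker's rounding
    expected=2,
)
print(tc.run())
```

The point of the sketch is only that data, operation, and objective are chosen together; changing any one of them changes the question being asked.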
Test Case Engineering (TCE) starts by breaking down the software product into three fundamental concerns: the technologies used, the business domain (or problem domain), and the architecture of the product.
Figure 2: Product under Test
Once TCE identifies these three concerns well, it grips the whole product with the help of the TCE practices depicted in the model below. The practices that an organization builds turn Test Case Engineering into a predictive activity that delivers the expected quality of test cases.
Figure 3: TCE Practices
The above model clarifies how TCE is a careful thought process for designing test cases that encompass the three distinct concerns of any software product. The practices employed can be divided into the following groups:
- Test management and leadership
- Knowledge Management
- Test designing practices
- Identifying information objectives
- Choosing test data
- Mapping to
- Development model
- Test types
- Test execution techniques- Manual or Automated testing
- Test methodologies- Functional or non-functional testing, White, Gray or Black-box testing
- Test case quality management processes
- Context-driven Reviews
- Test case management system
- Test case version control
TCE helps test teams achieve a balance between expectations, the time in hand, and the quality of work. It helps them decide how and when to trade off without compromising the objectives of the testing.
Each of the practices described above needs a self-correction mechanism in place. The discussion about the self-correction mechanism is beyond the scope of this paper.
The systematic approach of TCE can deliver high-quality test cases inherently and consistently if the test organization treats test cases as its product and applies the notion of quality to this very important aspect of the entire test management process.
Dr. Cem Kaner about 'Good Test Cases'
In the article 'What Is a Good Test Case?', Cem Kaner discusses the characteristics of good test cases. (This article can be downloaded from kaner.com.)
To begin with, Cem summarizes the characteristics of a 'good test case':
Designing good test cases is a complex art. The complexity comes from three sources:
- Test cases help us discover information. Different types of tests are more effective for different classes of information.
- Test cases can be “good” in a variety of ways. No test case will be good in all of them.
- People tend to create test cases according to certain testing styles, such as domain testing or risk-based testing. Good domain tests are different from good risk-based tests.
After the above discussion, Cem highlights some definitions of a test case and ends the discussion with:
In practice, many things are referred to as test cases even though they are far from being fully documented. Brian Marick uses a related term to describe the lightly documented test case, the test idea:
“A test idea is a brief statement of something that should be tested. For example, if you're
testing a square root function, one idea for a test would be ‘test a number less than zero’.
The idea is to check if the code handles an error case.”
Then, Cem presents us his definition of a test case:
In my view, a test case is a question that you ask of the program. The point of running the test is to gain information, for example whether the program will pass or fail the test. An important implication of defining a test case as a question is that a test case must be reasonably capable of revealing information.
An information objective, the answer to the question that a test case asks, is the purpose of running the test case. According to Cem,
Here are some examples of what are we trying to learn or achieve when we run tests:
- Find defects.
- Maximize defect count.
- Block premature product releases.
- Help managers make ship / no-ship decisions.
- Minimize technical support costs.
- Assess conformance to specification.
- Conform to regulations.
- Minimize safety-related lawsuit risk.
- Find safe scenarios for use of the product (find ways to get it to work, in spite of the defects).
- Assess quality.
- Verify correctness of the product.
- Assure quality.
To keep the discussion about characteristics of a good test case narrow and focused on one of the above information objectives, Cem says,
Let’s narrow our focus to the test group that has two primary objectives:
- Find defects that the rest of the development group will consider relevant (worth reporting) and
- Get these defects fixed.
Even within these objectives, tests can be good in many different ways. For example, we might say that one test is better than another if it is:
- More powerful
- More likely to yield significant (more motivating, more persuasive) results
- More credible
- Representative of events more likely to be encountered by the customer
- Easier to evaluate
- More useful for troubleshooting
- More informative
- Appropriately complex
- More likely to help the tester or the programmer develop insight into some aspect of the product, the customer, or the environment
Cem further discusses how, apart from information objectives, other aspects of test design come into play in characterizing a good test case.
He explains that test styles/types such as the following dominate the thinking of testers, and that this influences the "test qualities":
- Function testing
- Domain testing
- Specification-based testing
- Risk-based testing
- Stress testing
- Regression testing
- User testing
- Scenario testing
- State-model based testing
- High volume automated testing
- Exploratory testing
A test case is "good" within a particular test style. For example, a scenario tester might ask:
If I was a "scenario tester" (a person who defines testing primarily in terms of application of scenario tests), how would I actually test the program? What makes one scenario test better than another? What types of problems would I tend to miss, what would be difficult for me to find or interpret, and what would be particularly easy?
In his article, Cem discusses each of the above test styles, gives specific examples of good test cases, and relates them to the characteristics defined above. For example, about specification-based testing:
Check the program against every claim made in a reference document, such as a design specification, a requirements list, a user interface description, a published model, or a user manual.
These tests are highly significant (motivating) in companies that take their specifications seriously.
Specification-driven tests are often weak, not particularly powerful representatives of the class of tests that could test a given specification item. Some groups that do specification-based testing focus narrowly on what is written in the document. To them, a good set of tests includes an unambiguous and relevant test for each claim made in the spec.
Other groups look further, for problems in the specification. They find that the most informative tests in a well-specified product are often the ones that explore ambiguities in the spec or examine aspects of the product that were not well-specified.
Imagine a way the program could fail and then design one or more tests to check whether the program will actually fail in that way.
A “complete” set of risk-based tests would be based on an exhaustive risk list, a list of every way the program could fail.
A good risk-based test is a powerful representative of the class of tests that address a given risk.
To the extent that the tests tie back to significant failures in the field or well known failures in a competitor’s product, a risk-based test will be highly credible and highly motivating.
However, many risk-based tests are dismissed as academic (unlikely to occur in real use). Being able to tie the “risk” (potential failure) you test for to a real failure in the field is very valuable, and makes tests more credible.
Risk-based tests tend to carry high information value because you are testing for a problem that you have some reason to believe might actually exist in the product. We learn a lot whether the program passes the test or fails it.
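A risk-based test of this kind can be sketched in code. The `average` function and the empty-input risk below are hypothetical illustrations, not examples from Cem's article:

```python
def average(values):
    """A hypothetical function under test."""
    return sum(values) / len(values)

def risk_based_test_empty_input():
    """Risk: the program could fail on an empty list (division by zero).
    The test checks whether that imagined failure actually occurs."""
    try:
        average([])
    except ZeroDivisionError:
        return "fails as feared"      # the risk is real: defect confirmed
    return "handles empty input"

print(risk_based_test_empty_input())
```

Whether the test passes or fails, we learn something: either the suspected defect exists, or the program demonstrably handles the risky input.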
Cem’s concluding note is:
There’s no simple formula or prescription for generating “good” test cases. The space of interesting tests is too complex for this.
There are tests that are good for your purposes, for bringing forth the type of information that you’re seeking.
Many test groups, most of the ones that I’ve seen, stick with a few types of tests. They are primarily scenario testers or primarily domain testers, etc. As they get very good at their preferred style(s) of testing, their tests become, in some ways, excellent. Unfortunately, no style yields tests that are excellent in all of the ways we wish for tests. To achieve the broad range of value from our tests, we have to use a broad range of techniques.

Lee Copeland about Good Test Design
Lee Copeland writes in his legendary book, “A Practitioner's Guide to Software Test Design” that,
When I ask my students about the challenges they face in testing, they typically reply:
- Not enough time to test properly
- Too many combinations of inputs to test
- Not enough time to test well
- Difficulty in determining the expected results of each test
- Nonexistent or rapidly changing requirements
- Not enough time to test thoroughly
- No training in testing processes
- No tool support
- Management that either doesn't understand testing or (apparently) doesn't care about quality
- Not enough time
This book does not contain "magic pixie dust" that you can use to create additional time, better requirements, or more enlightened management. It does, however, contain techniques that will make you more efficient and effective in your testing by helping you choose and construct test cases that will find substantially more defects than you have in the past while using fewer resources.
To be most effective and efficient, test cases must be designed, not just slapped together. The word "design" has a number of definitions:
- To conceive or fashion in the mind; invent: design a good reason to attend the STAR testing conference.
- To formulate a plan for; devise: design a marketing strategy for the new product.
- To plan out in systematic, usually documented form: design a building; design a test case.
- To create or contrive for a particular purpose or effect: a game designed to appeal to all ages.
- To have as a goal or purpose; intend.
- To create or execute in an artistic or highly skilled manner.
Each of these definitions applies to good test case design. Regarding test case design, Roger Pressman wrote:
"The design of tests for software and other engineering products can be as challenging as the initial design of the product itself. Yet ... software engineers often treat testing as an afterthought, developing test cases that 'feel right' but have little assurance of being complete. Recalling the objectives of testing, we must design tests that have the highest likelihood of finding the most errors with a minimum amount of time and effort."
Copeland then examines the components of a test case: its inputs, its outputs, and its order of execution.
Inputs are commonly thought of as data entered at a keyboard. While that is a significant source of system input, data can come from other sources—data from interfacing systems, data from interfacing devices, data read from files or databases, the state the system is in when the data arrives, and the environment within which the system executes.
Outputs have this same variety. Often outputs are thought of as just the data displayed on a computer screen. In addition, data can be sent to interfacing systems and to external devices. Data can be written to files or databases. The state or the environment may be modified by the system's execution.
All of these relevant inputs and outputs are important components of a test case. In test case design, determining the expected outputs is the function of an "oracle."
An oracle is any program, process, or data that provides the test designer with the expected result of a test. Beizer lists five types of oracles:
- Kiddie Oracles - Just run the program and see what comes out. If it looks about right, it must be right.
- Regression Test Suites - Run the program and compare the output to the results of the same tests run against a previous version of the program.
- Validated Data - Run the program and compare the results against a standard such as a table, formula, or other accepted definition of valid output.
- Purchased Test Suites - Run the program against a standardized test suite that has been previously created and validated. Programs like compilers, Web browsers, and SQL (Structured Query Language) processors are often tested against such suites.
- Existing Program - Run the program and compare the output to another version of the program.
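Beizer's "Existing Program" oracle can be sketched in a few lines; the two sort functions here are illustrative stand-ins for a trusted previous version and a new version under test:

```python
def sort_v1(items):
    """Previous, trusted version of the program, used as the oracle."""
    return sorted(items)

def sort_v2(items):
    """New implementation under test (a simple insertion sort)."""
    result = []
    for item in items:
        i = 0
        while i < len(result) and result[i] <= item:
            i += 1
        result.insert(i, item)
    return result

def check_against_oracle(inputs):
    """'Existing Program' oracle: run both versions and compare outputs."""
    return all(sort_v2(x) == sort_v1(x) for x in inputs)

print(check_against_oracle([[3, 1, 2], [], [5, 5, 1], [9]]))
```

The same shape works for the other oracle types: only the source of the expected result changes (a regression baseline, a validated table, or a purchased suite).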
Regarding the order of execution, Copeland describes two styles of test case design:
- Cascading test cases - Test cases may build on each other. For example, the first test case exercises a particular feature of the software and then leaves the system in a state such that the second test case can be executed. In testing a database, consider these test cases:
- Create a record
- Read the record
- Update the record
- Read the record
- Delete the record
- Read the deleted record
Each of these tests could be built on the previous tests. The advantage is that each test case is typically smaller and simpler. The disadvantage is that if one test fails, the subsequent tests may be invalid.
- Independent test cases - Each test case is entirely self contained. Tests do not build on each other or require that other tests have been successfully executed. The advantage is that any number of tests can be executed in any order. The disadvantage is that each test tends to be larger and more complex and thus more difficult to design, create, and maintain.
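The two ordering styles can be contrasted in a short sketch; the in-memory dictionary standing in for a database, and the record keys, are illustrative:

```python
# --- Cascading style: each step depends on the state left by the previous one
db = {}

def run_cascading():
    db.clear()
    db["id1"] = "alpha"                 # create a record
    assert db.get("id1") == "alpha"     # read the record
    db["id1"] = "beta"                  # update the record
    assert db.get("id1") == "beta"      # read the record again
    db.pop("id1", None)                 # delete the record
    assert db.get("id1") is None        # read the deleted record
    return "cascading sequence passed"

# --- Independent style: each test builds its own fresh state
def test_update():
    local = {"id1": "alpha"}            # self-contained setup
    local["id1"] = "beta"
    return local.get("id1") == "beta"

def test_delete():
    local = {"id1": "alpha"}
    local.pop("id1", None)
    return local.get("id1") is None

print(run_cascading())
print(test_delete() and test_update())  # order-independent
```

Note the trade-off Copeland describes: the cascading steps are tiny but fail together, while each independent test repeats its own setup.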
As Lee says of his book, "It does, however, contain techniques that will make you more efficient and effective in your testing by helping you choose and construct test cases that will find substantially more defects than you have in the past while using fewer resources."
The book walks us through various test design techniques, explained with a number of good examples, which makes it an interesting and rewarding read. The book is a must-read for all test professionals.
Ross Collard about Good Test Cases
Apart from the generic discussion that one normally finds in books and articles on test case design, Ross discusses a few practical problems of test teams that affect test design time and again. I have highlighted some examples below to keep this reference narrowly focused on those variations.
In his forthcoming book, Developing Software Test Cases, Ross defines test case as,
“A test case exercises one particular situation or a condition of the system being tested.”
He describes the purpose of test cases thus:
“A test case describes what to test and how, and provides direction to the person (or automated test tool) who performs the testing. The direction includes how to set up and execute the test case, what behaviors to monitor, and how to evaluate the results.”
After a detailed discussion of test case structure (test condition, test procedure, test results, and test environment), he says,
“There is no standard or correct way to document test cases. Multiple projects will use multiple formats, or a single project will use multiple formats. Such variation is justified when test cases need different information, or it might simply be due to a lack of coordination in test teams, which is often the case.”
Ross then brings out a very interesting discussion about the level of detail a test case may need depending on whether an experienced or an inexperienced test team will be executing it. He then raises another important aspect of good test design: the collaboration of insiders and outsiders. According to him, insiders usually have very good knowledge of the system under test and can develop better tests from that perspective; however, they lack detachment from the development team, and human relationship dynamics and a loss of objectivity may cost them the advantages of that good design.
Outsiders, on the other hand, have less knowledge of the system, but thanks to their detachment they may be highly successful in bringing in a ‘breath of fresh air’. Their main weakness, however, is their limited depth of system knowledge, and Ross thinks this is one of the areas where they have to work hard in order to develop good-quality test cases.
Harry Robinson’s thoughts for Good Test Cases
I have taken the following references from two of Harry’s papers, “Intelligent Test Automation” and “It’s Different in Test”. Though these papers address different topics, I found in them a few very useful tips for test case design, which I have reproduced below.
Harry Robinson is Test Architect for Microsoft's Enterprise Management Division. In addition to his day job, he teaches classes on advanced software test automation and is a driving force behind Microsoft's model-based testing initiative. He has been at Microsoft for five years and has a bachelor's and master's in electrical engineering. You can reach him at [email protected].
“Intelligent Test Automation”
Let’s look at an example of creating and using a behavioral model to test a software application.
Hands-on testing is a good way to start the test automation process. I call this phase “exploratory modeling” because it combines exploratory testing with the discovery of a model that can later be used to generate tests. As you begin to understand the behavior of each action, you can create rules that will help you model and test the application.
This is the essence of model-based testing: To describe the behavior you expect in a way that can be used to generate tests. Ask yourself the following two questions for every action you are going to test:
- When is this action possible?
- What is the outcome when this action is executed?
For instance, suppose you have been asked to test the behavior of files in a Windows folder. In particular, you are going to test the Create, Delete, and Invert Selection actions.
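Robinson's folder example can be sketched as a simple behavioral model. The rules below (for instance, that a newly created file becomes the selection) are assumed simplifications for illustration, not the actual Windows specification:

```python
import random

class FolderModel:
    """Simplified behavioral model of files in a folder, loosely following
    Robinson's Create / Delete / Invert Selection example."""
    def __init__(self):
        self.files = set()
        self.selected = set()

    # For each action: when is it possible, and what is its outcome?
    def can_create(self): return True
    def create(self):
        name = f"file{len(self.files)}"
        self.files.add(name)
        self.selected = {name}            # assume: new file becomes the selection

    def can_delete(self): return bool(self.selected)
    def delete(self):
        self.files -= self.selected
        self.selected = set()

    def can_invert(self): return True
    def invert_selection(self):
        self.selected = self.files - self.selected

def generate_test(model, steps=20, seed=0):
    """Random walk over the model: each enabled action becomes a test step."""
    rng = random.Random(seed)
    actions = [("create", model.can_create, model.create),
               ("delete", model.can_delete, model.delete),
               ("invert", model.can_invert, model.invert_selection)]
    trace = []
    for _ in range(steps):
        name, guard, act = rng.choice([a for a in actions if a[1]()])
        act()
        trace.append(name)
        assert model.selected <= model.files   # invariant: selection is valid
    return trace

print(generate_test(FolderModel())[:5])
```

This is the essence of the two questions above: the guards answer "when is this action possible?", the action bodies answer "what is the outcome?", and the walk generates ever-new test sequences from those answers.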
“It’s Different in Test”
There's a saying that "If you all think alike, some of you are unnecessary." If that is true anywhere, it is certainly true among testers. There may be some justification for having a team of developers think in lockstep, but for testers it would be a catastrophe for everyone to think alike. One of the chief benefits testers bring to any group is a different viewpoint, and they should be encouraged to disagree with each other. I get a kick out of watching serious testers talk; it is usually at elevated volume levels.
Actively seek out a good mix of skills and backgrounds when filling out your test team. Do you have coders and non-coders on your team? Each will see different types of defects. How about both genders? Multilingual? Culturally diverse? Differently-abled? One of my favorite true stories is about the Tablet PC team demonstrating its handwriting recognition function to Bill Gates. The team was very excited and confident, but they had overlooked a big factor. The Tablet PC team was almost all right-handed, but Gates is a lefty. Guess what? The handwriting recognition didn't work well for him. Oops.
Hung Nguyen about Good Test Cases
Hung Nguyen is a co-author of the legendary book “Testing Computer Software”. He is now CEO, President, and Founder of LogiGear Corporation, which offers a variety of services in the software testing arena.
Hung has written a paper, “Tests Built-to-Last: Software Tests as an Asset”, about good test cases, and insists that test cases should be treated as an asset. His thought is analogous to ours of treating test cases as a product.
"Software Tests as an Asset" Philosophy
Software testing, and the software tests that are created, need to be looked upon as an easily maintained, renewable and reusable resource that inherently captures what an organization knows about how the software under test is supposed to work, and provides visibility into the quality of the software under test. Some of the most fundamental precepts of the "Software Tests as an Asset" philosophy are:
- Good test design must be the focus of software testing, with very little time actually spent on automation of the tests.
- Tests should be inherently automation-ready so that a software test case can be automated with very little effort.
- Tests to be automated should follow the "5% rules" developed by LogiGear CTO Hans Buwalda: No more than 5% of tests should be executed manually; No more than 5% of testing efforts should be expended automating the tests.
- Software tests, whether they are manual or automated, should be optimized for visibility, reusability, scalability, and maintainability.
Clearly, this is fundamentally different from the more "traditional" approaches to software testing. Only by adopting such a dramatic shift in philosophy will an organization be able to significantly improve the speed and reliability of testing, while reducing costs and improving software quality. By adopting such a philosophy, an organization will have software tests that are built to last. The tests will be reusable as is, or with small amounts of maintenance, to test future generations of the software under test.
There will be tests that will not be subject to the "Software Tests as an Asset" philosophy. For example, tests that are simply focused on defect finding that you know you will never use again once the software is released would not benefit from the "Software Tests as an Asset" philosophy.
Alan Richardson about a Test Design Technique
While working as a developer, coding software testing tools, Alan Richardson’s interest switched from programming to software testing. Since 1993, software testing has been Alan’s professional specialism, and he has worked at all levels of the testing hierarchy: test execution and design, test management, strategy, and methodology. He is currently an independent test consultant and helps his clients with every aspect of software testing.
Alan wrote a paper, “Practical Experiences in Graph-Based Testing”, which has many useful tips for making test cases effective and reducing the effort of developing tests. I will quickly focus on some of those useful tips from his paper.
“One of the main reasons for writing this paper is that, and I generalise wildly here, I don’t think that structural graph models are used enough in testing. I certainly don’t encounter many testers using graphs, which is a shame as graphs are well covered in the testing literature, and they are enormously helpful. I could suppose that this might be due to testing tool support, but I don’t really believe that. Perhaps other testers just haven’t been through the same testing experiences as I have? So one thing I’m going to do in this paper is present an account of my use of graphs in testing.”
“The use of graphs in testing is not a panacea, but it is useful, it is a technique that is easy to learn and has numerous benefits. Sometimes I don’t apply it, but you have to learn a technique so that you know when to apply it and when to discount it.
Graphs will be presented as useful models for:
- deriving test conditions
- understanding systems
- communicating the tester’s view of the system
- automating the production of test scripts
- assessing a variety of coverage measures
- visualising and reporting coverage measures”
Alan then shows us how graphs can be used to negotiate critical situations and achieve highly satisfactory results.
Communication Graphs leading to test development
The project was being conducted in a structured testing and development environment, but it was not going well. The development specifications were ambiguous, being produced late, delivered late to the test team, and they were wrong.
Writing test scripts was taking too long and the constant change resulted in too much rework. As a mitigating strategy I produced a number of graphs which I could use to communicate my understanding to the development team. Normally I didn’t use my graphs in this way, but it seemed fairly sensible that if I was using them to understand the system then I could use the graphs to communicate my understanding of the system.

The graphs went through quite a few iterations until they were understood by the development team and everyone was satisfied that they represented what the testers could do to the system. The development team also made a few changes to the system as a result of reviewing the graphs, as they identified issues with the system or specification. And where the graphs were not complete, we supplemented them with some high-level test case descriptions.

An unexpected benefit from all this communication was that the development team reported it was the first time they actually had proper visibility into the test process. They had always been too busy to read the test documentation before or plow through dozens of test scripts to see what the testers would be doing.

As the project was running to tight timescales, and we still had to plan the testing, the scripts were documented at a high level of abstraction: with an overall aim, the path through the graph to be taken, the test data to be used, and the various test conditions that would be covered.
Alan summarizes a variety of benefits of using graphs for testing:
- Rather than have people review hundreds of scripts, which they never seem to manage to get around to doing or doing effectively, they can instead review a handful of diagrams.
- Graphs communicate my understanding and help other people understand the systems better.
- The cliché is that a picture is worth a thousand words. My experience tells me that a graph can summarise several pages of requirements and development specifications.
- Sometimes all that has to be done to prevent defects is draw a graph, show it to the developer and ask a few questions.
- Sometimes the graphs will make it into the development documentation for future readers.
- Show your graphs to other people, don’t keep them to yourself, they are too valuable for that.
- Remember, there is more to a graph than just a picture of a graph. The picture is a visualisation of an underlying representation, and this is what you are communicating, you use the picture as a way of communicating it.
- Generally when logic or interaction is hard to comprehend in textual form, I draw a graph. And when I am drawing graphs for understanding I use my favourite tool; Paper and Pen.
- To support script definition, extra step description information is added to the nodes and edges, to tell the tester what to do when traversing a path through the graph.
- Don’t try to model everything in a single graph, use multiple graphs.
- You don’t have to model everything in a graph, you do have other test techniques too.
- You don’t need to get too detailed, keep your graphs manageable.
- If you have a set of test scripts which are similar, consider reverse engineering a model of the scripts as a graph.
- Graphs can evolve, and become more detailed, as the project matures.
- Test Design is different from Test Script design. A test is something that you want to check. A script tells you how to check it in order to determine if a particular test can be passed. When viewing the test process through the eyes of the Script Meta Model, tests are justifications for traversing a path.
- In a structured test process graphs can be used to provide a justification for some of the scope for testing, in session-based testing, a graph can provide a justification for some of the test charters.
- The standard way of deriving tests from graphs is by covering the paths through the graph [Beizer95][Binder00][Beizer90]. Despite the utopian ideals of the Script Meta Model, we will not get all our tests from coverage. The graphs we produce are usually not detailed enough.
- We get extra tests from the graph by examining it and identifying the behaviour that is not modelled.
- Graphs which are modelled too deeply too quickly can become unwieldy and hard to maintain. There has to be a trade off between understanding, communication, and derivation and this will depend on why you drew the graph, the position in the lifecycle, and probably the toolset that you are using to model the graphs.
Test Script Design
- A test script and a test design are different. All test scripts could be automatically generated from a very detailed model; all test designs cannot.
- In essence we derive paths from the graph. Paths can be described as sequences of edges. Because every edge has a start node and an end node, we only need to use edges.
- Paths are generated from graphs by the application of strategies to the graph: e.g. node coverage, branch coverage, predicate coverage, cover all loops twice
- When we are covering the paths, we are not covering the tests. The script meta model shows us that paths are independent of tests. It is possible to build scripts from paths directly, but these scripts do not have the contextual aims that the condition model provides, nor do they exercise the data adequately.
- A Path is a script, but it is an uninstantiated script. To become a test script, we need to know the data used and the conditions covered.
- When I extend the detail in graphs to make them more suitable for script generation, I try to avoid confusing the graph. So I add any important linking edges by adding them as dotted lines, and making start and end nodes different colours. These extra visualisation attributes are important for retaining the communication benefits.
- Give each graph, each node, and each edge a unique ID that you can use in your path edge sequences and in your cross-referencing.
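The path-derivation idea above can be sketched as code. The all-edges strategy below (walk from the start node to an uncovered edge, then traverse it) and the login-screen model are illustrative assumptions, not Alan's actual tooling:

```python
from collections import defaultdict, deque

def shortest_path(adjacency, start, goal):
    """BFS over the graph; returns a list of edges from start to goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for a, b in adjacency[node]:
            if b not in seen:
                seen.add(b)
                queue.append((b, path + [(a, b)]))
    return None  # goal unreachable from start

def edge_coverage_paths(edges, start):
    """Derive one path per uncovered edge until every edge is covered:
    a simple strategy for the 'cover all edges' coverage measure."""
    adjacency = defaultdict(list)
    for a, b in edges:
        adjacency[a].append((a, b))
    uncovered, paths = set(edges), []
    while uncovered:
        target = min(uncovered)                       # deterministic choice
        prefix = shortest_path(adjacency, start, target[0])
        if prefix is None:
            break                                     # remaining edges unreachable
        path = prefix + [target]
        for e in path:
            uncovered.discard(e)                      # prefix edges count too
        paths.append(path)
    return paths

# Hypothetical login-screen model: nodes are states, edges are actions
edges = [("start", "login"), ("login", "home"), ("login", "error"),
         ("error", "login"), ("home", "logout"), ("logout", "start")]
for p in edge_coverage_paths(edges, "start"):
    print(p)
```

As the paper notes, each derived path is only an uninstantiated script: to become a test script it still needs the test data and the conditions it is meant to cover.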