the reader can readily relate the definition to the dashboard elements.
Fig. 5 - The problem dashboard is defined through a "dashboard" command that lets you, using a very high-level language, specify what you want to see on the dashboard and dynamically select the data you're interested in viewing (e.g., Product/Stream/Start Date). The STS Engine automatically provides capabilities such as graphics zoom, interactive charts (click on a bar to show a data display panel for it), object-oriented pop-up menus, and drill-down tables and data display panels.
In this dashboard, the user can select any product and then any development stream, and look at details such as "priority vs. status" for the stream, examine a specific problem, or zoom into the problems fixed in a particular month for specific details (i.e., by clicking on a graph bar).
As you can see, the issue isn't generating the dashboard; it's specifying what you want on it. And if you have enough widget variety, you're all set.
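The point that generation is the easy part, once a specification exists, can be illustrated with a small sketch. The spec format, widget names, and query strings below are hypothetical, invented for illustration; they are not CM+ syntax. The idea is simply that a declarative list of widget descriptions is enough to drive rendering:

```python
# Hypothetical declarative dashboard spec: one entry per element.
# Widget names, fields, and query strings are illustrative only.
SPEC = [
    {"widget": "selector", "label": "Product", "query": "/products"},
    {"widget": "selector", "label": "Stream", "query": "/streams ?Product"},
    {"widget": "bar_chart", "label": "Fixed by Month",
     "query": "/problems ?Stream status=fixed", "group_by": "month"},
    {"widget": "table", "label": "Priority vs Status",
     "query": "/problems ?Stream", "axes": ("priority", "status")},
]

def render(spec):
    """Walk the spec and emit a placeholder line for each element.
    A real engine would instantiate live widgets instead of strings."""
    return [f"{e['widget']}: {e['label']} <- {e['query']}" for e in spec]

for line in render(SPEC):
    print(line)
```

With a generic engine like this, adding an element to the dashboard is one more entry in the list, which is exactly the "specify, don't program" property the article is describing.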
A developer might prefer to see an Update Review dashboard, from which he can select the update and the file (within the update) he wants to view, add context around the delta report, or add a review comment. The dashboard is defined by:
dashboard ?/changes @user status <= ready or reverse updates/(status title)Update
?,4[@setcount 'mods ?Update']File_Count
?<>25x160![> delta ?File -match ?Match -context ?Context oldnew -dir ?^deltaworkarea ?Update]Delta
?<>4x120[>get -term ?Update -field notes]Notes
?/problems ?Update/2(status title)##Problems
?,/activities ?Update/2(status title)##Activities
?'Update Review Station'
Fig. 6 - A simple Update Review station. This configuration shows a single file at a time. It may trivially be converted to show all of the file deltas for the Update at once, in the scrollable window. A review can be added directly from the review station. You may zoom in on the traceability elements (in this case a problem report) while reviewing.
Notice how each of the components of the Update Review Station dashboard has a corresponding, but simple, line in the definition, just as for the Problem Dashboard. The definition lines may be "coded" manually, or generated through a dashboard specification station. But in any case, it's not difficult to see how to specify the dashboard elements. The fact that some elements are defined by selections made earlier in the dashboard is what gives the dashboard a dynamic context.
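That dynamic-context idea, a selection made in one element parameterizing the queries of the elements after it, can be sketched as follows. The `?Name` placeholder substitution mimics the flavor of the definition above, but it is a simplification for illustration, not the actual CM+ mechanism:

```python
def bind(query, context):
    """Substitute ?Name placeholders with values chosen earlier in the
    dashboard (a simplified stand-in for the real binding mechanism)."""
    for name, value in context.items():
        query = query.replace(f"?{name}", str(value))
    return query

# Later elements reference the update and file picked in earlier elements;
# the identifiers here are made up for the example.
context = {"Update": "upd_1042", "File": "main.c"}
print(bind("/problems ?Update", context))        # -> /problems upd_1042
print(bind("delta ?File -dir ?Update", context)) # -> delta main.c -dir upd_1042
```

Change the selection in the first element and every dependent element re-binds automatically, which is why one short definition can serve as a live, interactive station rather than a static report.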
So, for CM you might ask the vendor, "What dashboards does it have?" But in CM+ you'll ask the user, "What dashboard(s) do you need for your roles and tasks?" And that's what you want: dependence on the user, not on the vendor. And if a user doesn't quite like the result, add a line or two, or modify the presentation to suit the user's needs. Hey, this sounds like Agile development. And that's just what it is, but for your CM tool, after you've acquired it.
More on Customization
Customization is a critical part of next generation ALM. But if customization costs a lot, it won't get done. Or if, once done, it's too hard to change, it can't be widely used and will grow stale. That's why customization has to be easy to do (have I said that already?).
The dashboards of CM+ appear easy to define. The first customization goal for any next generation ALM tool must be to take the definition capability away from the compiler, away from the developer, and even away from the consultant, and give it to the user. In reality, the customer often can't be bothered to customize, so it may be better to