Methodological Questions Suggested by the Scenarios

 

In this section we present a set of questions that probe the capabilities and limitations of various methodologies. The set is intended to be general enough to cover any methodology, regardless of how it tackles or names each of the traditional software engineering phases. As we do not want to bias towards or against any methodology, the questions are not grouped by any particular aspect, leaving anyone using the exemplar free to answer them in the ways best suited to each methodology. Since the exemplar is meant to be an open-ended, real-world example, we intend to keep improving this set of questions over time with input from the community.

Detailed Questions, During Development

 
QA1          Pro-activeness- Reasoning about unexplained instances of hypoglycemia during the past few months, and evaluating them against the pattern of insulin dosing and the recorded periods of exercise, leads the GA to conclude that the occurrences of hypoglycemia are most likely due to skipped snacks. The GA_PDA makes a note to itself to use the opportunity of a query about diet to ask Abby about possible snacks being missed (EA3.1). This scenario shows the software working in a pro-active manner, which is quite important here, since it serves to alert the patient about a problem she is not aware of and therefore would never ask about. How would agent-oriented and non-agent-oriented approaches differ in the modeling and analysis of such cases?
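Purely as an illustration of the kind of behaviour QA1 asks about, and not as a construct of any of the methodologies under comparison, such a proactive rule might eventually be realized along the lines of the following Python sketch; the class names, fields and threshold are all hypothetical.

    # Illustrative sketch only: a hypothetical proactive rule inside a GA_PDA-like agent.
    # All names and thresholds are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Episode:
        glucose_low: bool         # was a hypoglycemic value recorded?
        insulin_as_planned: bool  # did dosing follow the prescribed pattern?
        exercised: bool           # was exercise recorded around the episode?
        snack_recorded: bool      # was the scheduled snack logged?

    @dataclass
    class ProactiveAgent:
        pending_questions: list = field(default_factory=list)

        def review_episodes(self, episodes):
            # If hypoglycemia keeps occurring although insulin and exercise look normal,
            # a missing snack becomes the most plausible explanation worth asking about.
            suspicious = [e for e in episodes
                          if e.glucose_low and e.insulin_as_planned
                          and not e.exercised and not e.snack_recorded]
            if len(suspicious) >= 3:  # hypothetical threshold
                self.pending_questions.append(
                    "Have you been skipping your afternoon snack lately?")

        def on_diet_query(self):
            # Pro-activeness: piggyback the stored question on the next diet-related dialogue.
            return self.pending_questions.pop(0) if self.pending_questions else None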
QA2        Human autonomy vs. software autonomy- In scenario EA4.1 Abby (a human being) has the autonomy to follow or ignore advice from the GA and to modify the GA_PDA's authorization to communicate with her parent's desktop computer. How would the software engineer handle this autonomy using this methodology? How does one decide which decisions are to be made at design-time and which at run-time?
QA3        Autonomy reasoning- The GA_Home_Computer and GA_Hospital_Process agents may not come to an agreement about rescheduling the consultation (EA4.2). Which methodological constructs can support reasoning about this problem? Once agents can have autonomy (both GA agents and human agents), how are their behaviours described and analyzed? Where do requirements end and design begin? For example, how would the agent interaction analysis in [Miles 01] compare with the approach in DESIRE [Jonker 00]?
QA4       Different levels of abstraction- How does the methodology support navigating from abstract levels of reasoning to concrete ones and vice versa, as in the recursive definitions of agency in Gaia and ADEPT [Jennings 00] or Levels 0 and 1 in MESSAGE [Evans 01]?
QA5        Identifying participants in the domain- In scenarios with many participants (e.g., EA1.0, EA2.0 and EA2.1), how can the methodology help identify participants such as the physician and the GA in the patient’s computer? Would the Universe of Discourse model from Cassiopeia [Collinot 96] help?
QA6        Capturing, understanding and registering terminology- Terms like hypertension in scenario EA1.0 or risk factors in scenario EA1.1 are not common knowledge. How would the methodology help the software engineer understand such terms? Does the methodology provide support for using ontologies? Is there any vocabulary control, as in Catalysis [D’Souza 99]?
QA7        Domain analysis- The GA involves complex social issues; how does the methodology support modelling and reasoning about the social relationships involved in the above scenarios? How would it represent, for example, the fact that the patient expects a progress-monitoring plan to be established by the physician, as in scenario EA2.0? Iglesias [Iglesias 99] mentions that MAS-CommonKADS [Iglesias 98] has an informal phase for collecting user requirements using use cases and MSCs (Message Sequence Charts) [Regnell 96]; can the software engineer benefit from this? MESSAGE [Evans 01] offers the Organisation and Goal/Task views; how much would they help?
QA8        Finding requirements- How does the methodology help in discovering and refining requirements? For example, in MESSAGE [Evans 01], how does one arrive at the organization diagram and the task/goal implication diagrams?
QA9        Human-machine cooperation- The diet and exercise scenario (EA2.4) illustrates how the GA might explore alternatives to help the patient achieve therapy goals while respecting personal preferences and lifestyles. How does the methodology help identify and analyze cooperative problem solving scenarios?
QA10    Database design- The GA implies the use of different and possibly distributed databases for accessing information such as drug compliance, diets and exercise programs, among others (EA2.2, EA2.3 and EA2.4). How does the methodology determine the modes of interaction with these databases?
QA11    Database evolution- How would the methodology support the fact that databases such as those for drug compliance and diets, mentioned in EA2.2, EA2.3 and EA2.4, will be continuously evolving (EA0.1), eventually offering new features or even changing signatures and capabilities?
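As a purely illustrative aside, and not a prescription of any methodology, the kind of signature change QA11 mentions is often absorbed behind a thin adapter; the sketch below invents a drug-compliance provider whose interface changes between versions, and every class, method and value in it is hypothetical.

    # Illustrative sketch only: a thin adapter absorbing a hypothetical signature change
    # in an external drug-compliance database, so the rest of the GA keeps calling a
    # stable interface. All class and method names are invented for illustration.

    class DrugComplianceV1:
        def get_compliance(self, patient_id):
            return {"patient": patient_id, "missed_doses": 2}

    class DrugComplianceV2:
        # The provider changed the signature: it now requires a date range.
        def compliance_report(self, patient_id, start, end):
            return {"patient": patient_id, "missed_doses": 2, "from": start, "to": end}

    class ComplianceAdapter:
        """Keeps the interface the GA depends on stable across provider versions."""
        def __init__(self, backend):
            self.backend = backend

        def get_compliance(self, patient_id):
            if hasattr(self.backend, "get_compliance"):
                return self.backend.get_compliance(patient_id)
            # Placeholder date range for illustration only.
            return self.backend.compliance_report(patient_id, "2024-01-01", "2024-12-31")

    print(ComplianceAdapter(DrugComplianceV2()).get_compliance("abby"))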
QA12    Database design and legacy- Risk factors for myocardial stroke (EA1.1) probably need to be retrieved from an existing database. How does the methodology support modeling and reasoning about legacy systems?
QA13    Reasoning about different non-functional aspects- Communicating with different databases, such as those for diets and exercise programs that might reside on different servers, can be seen in EA2.3 and EA2.4. How does the methodology help determine the appropriate responses from the GA if one or more of the databases cannot be reached? How would these appear in AUML [Odell 00] interaction diagrams?
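To make the non-functional concern in QA13 concrete, one possible degraded-mode policy is sketched below, purely for illustration: the class names, the simulated outage and the fall-back-to-cache policy are all invented rather than drawn from the scenarios or from any methodology.

    # Illustrative sketch only: one possible degraded-mode policy when a remote
    # database (e.g., a diet or exercise-program server) cannot be reached.
    import time

    class RemoteDietDB:
        def fetch_plan(self, patient_id):
            raise ConnectionError("diet server unreachable")  # simulate an outage

    class DietService:
        def __init__(self, remote_db, cache=None):
            self.remote_db = remote_db
            self.cache = cache or {}

        def current_plan(self, patient_id):
            try:
                plan = self.remote_db.fetch_plan(patient_id)
                self.cache[patient_id] = (plan, time.time())
                return plan, "live"
            except ConnectionError:
                # Non-functional concern: degrade gracefully rather than fail.
                if patient_id in self.cache:
                    plan, fetched_at = self.cache[patient_id]
                    return plan, f"cached (as of {time.ctime(fetched_at)})"
                return None, "unavailable - advise patient to retry later"

    service = DietService(RemoteDietDB())
    print(service.current_plan("abby"))  # -> (None, 'unavailable - advise patient to retry later')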
QA14    Mobility- Many different agents, such as GA_Hospital_Process, GA_PDA, GA_Home_Computer and the physicians, are depicted in EA4.0, EA4.1 and EA4.2. Many others could be involved, such as the physician's assistant software that must be aware of changes in patients' routines, test results and other important information. Moreover, consider that more than one physician may assess a single patient, since some cases might demand different expertise. Many of these agents may also lack a fixed location, as physicians might be allowed to use PDAs, and software can potentially migrate from one host to another. How does the methodology help reason about types and levels of mobility? How much intelligence should each agent contain? At what cost?
QA15   User interface design- Abby's double dance class scenarios (EA4.0, EA4.1 and EA4.2) indicate that many different pieces of software will have to interact with different users. How does the methodology lead to interface designs that respect the diversity of users?
QA16   Generating test cases- The “skipped snack” episode (EA3.1) involves many software artifacts interacting with their users and performing different tasks. Can the software engineer benefit from the methodology to generate test cases from models? At which level (module, integration, system)? What would be tested in the above scenario?
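To make QA16 concrete, a system-level test derived from EA3.1 might look roughly like the sketch below. It is illustrative only; it reuses the hypothetical ProactiveAgent and Episode classes sketched after QA1, and the expected behaviour is simply the observable outcome described in the scenario.

    # Illustrative sketch only: a scenario-derived test for the "skipped snack" episode.
    # Assumes the hypothetical ProactiveAgent and Episode classes from the QA1 sketch
    # are available in the same module.
    import unittest

    class SkippedSnackScenarioTest(unittest.TestCase):
        def test_agent_asks_about_snacks_after_repeated_hypoglycemia(self):
            agent = ProactiveAgent()
            history = [Episode(glucose_low=True, insulin_as_planned=True,
                               exercised=False, snack_recorded=False)
                       for _ in range(3)]
            agent.review_episodes(history)
            # Expected observable behaviour from EA3.1: the next diet query
            # triggers a question about possibly skipped snacks.
            self.assertIn("snack", agent.on_diet_query().lower())

    if __name__ == "__main__":
        unittest.main()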
QA17   User interface design- The interaction between the GA_PDA and the patient, depicted in EA3.1, can be a key factor in the success or failure of the project. How would the methodology help software engineers with the user interface design?
QA18    Architectural design and reasoning- Scenario EA6.0 portrays an important aspect of the GA: it may never require a “reset” when changes have to be made. How does the methodology support software engineers in modelling aspects like this? For example, DESIRE [Jonker 00] offers component management; how much would it help here?
QA19    Eliciting and reasoning about non-functional aspects- Possible catastrophes are mentioned in scenario EA6.1; how does the methodology facilitate the elicitation of such requirements? Can softgoals in i* [Yu 97] and Tropos [Castro 01] [Perini 01] contribute to modeling them?
QA20   Architectural design and reasoning- The need for flexible architectures that can support evolution is prescribed, among other things, in EA7.0. How can the software engineer model and reason about different architectures to support this scenario?
QA21   Architectural design and reasoning- Important aspects like communication costs and reliability are stated in EA6.2. These aspects may contribute differently to different software architectures. How would the methodology support reasoning about alternative architectures with respect to aspects like the ones mentioned above?
QA22    Validating the specification over the life cycle- Security, authentication, alerting and notification are mentioned in EA7.0. How does the methodology facilitate validating that such aspects are being met during design, coding and in the final product?
QA23    Tracing changes in the requirements into design- The need for interfacing with a new database containing possible cross-reactions due to the use of a medication is introduced in EA5.0. It also introduces the need to reason about how this will affect the physician's assistant software, since drug prescription assistance would be affected. If scenario EA5.0 is elicited late, during the design phase, does the methodology provide any mechanisms to identify where the design will be affected?
QA24    Tracing changes from design to code- Once the software engineer has finished reasoning about the design changes needed to support scenario EA5.0, does the methodology provide support for finding the parts of the code that will be affected?
QA25    Concurrency- In scenarios EA3.0 and EA3.1 we can see three different pieces of software (GA_PDA, GA_Home_Computer and GA_Hospital_Process) that will necessarily be operating concurrently, each taking its own initiative. How would the methodology support this requirement?
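As a minimal illustration of what QA25 is asking about, the three pieces of software could be modelled as independently initiating, concurrently running tasks, as in the sketch below; the use of asyncio and the message contents are assumptions made only for this example.

    # Illustrative sketch only: the three pieces of software named in EA3.0/EA3.1
    # modelled as concurrently running tasks, each taking its own initiative.
    import asyncio

    async def ga_pda(bus):
        await bus.put(("GA_PDA", "logged meal entry"))

    async def ga_home_computer(bus):
        await bus.put(("GA_Home_Computer", "nightly trend analysis finished"))

    async def ga_hospital_process(bus):
        await bus.put(("GA_Hospital_Process", "consultation slot confirmed"))

    async def main():
        bus = asyncio.Queue()
        # Each agent acts on its own initiative; none waits for the others to act first.
        await asyncio.gather(ga_pda(bus), ga_home_computer(bus), ga_hospital_process(bus))
        while not bus.empty():
            sender, event = await bus.get()
            print(f"{sender}: {event}")

    asyncio.run(main())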
QA26    Tracing back to requirements- Suppose that during a design revision the software engineer is in doubt whether the GA should really deal with allergies. Can this be traced back to requirements? How much rationale does the methodology record? Can one find out which stakeholder was responsible for pointing out the need for this requirement?
QA27    Software modularity- The GA needs to be able to handle interfacing with instruments from several companies, as depicted in EA8.0. How does the methodology support work by different groups? Can the software for handling instrument interfaces be bought from other companies, as portrayed in scenario EA8.0? For example, can packages from AUML [Odell 00] help with that?
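One way the instrument-interfacing concern of QA27 could be isolated is sketched below, purely for illustration: drivers supplied by different companies plug in behind a small interface. The interface, the vendor name and the stub reading are all invented for this example.

    # Illustrative sketch only: isolating instrument interfacing (EA8.0) behind a
    # small driver interface, so vendor-specific drivers can be developed or bought
    # independently of the rest of the GA.
    from abc import ABC, abstractmethod

    class GlucoseMeterDriver(ABC):
        @abstractmethod
        def read_mg_dl(self) -> float:
            """Return the latest blood-glucose reading in mg/dL."""

    class AcmeMeterDriver(GlucoseMeterDriver):  # hypothetical third-party driver
        def read_mg_dl(self) -> float:
            return 92.0  # stub value standing in for a real device protocol

    class GA:
        def __init__(self, meter: GlucoseMeterDriver):
            self.meter = meter  # the GA depends only on the interface, not the vendor

        def record_reading(self):
            return self.meter.read_mg_dl()

    print(GA(AcmeMeterDriver()).record_reading())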
QA28    Formal Verification and Validation- Does the methodology provide any means for formal verification and validation? Can the software engineer validate the different products throughout the software development process? To what extent can inconsistency be tolerated and managed?
QA29   Project management- Suppose that at some point in the development process the project manager decides to redirect the project, targeting a smaller number of requirements in order to achieve a faster delivery (EA9.0). How does the methodology help implement this change?
QA30    Working in distributed teams- Developing software usually demands more than one team so that different goals can be pursued concurrently. The teams must coordinate activities and exchange knowledge and ideas. How easy is it to partition the activities? How would the methodology facilitate this? What support is there, for example, for forking and joining models?
QA31   Tool support- How much tool support does the methodology provide?
QA32    Learning curve- How easily can the software engineer learn the methodology and its tools?
QA33    Integration with other methodologies- How easily can the software engineer use other models together with the methodology?

 

 

Detailed Questions, After Deployment

 
 
QB1        Flexibility to evolve / Reuse- How would the methodology handle the evolution of the system portrayed in scenario EB1.0? How easily, and to what extent, can the software engineer reuse existing design, code and test cases? For example, although agent interaction analysis [Miles 01] claims that, owing to the agent characteristics used in the framework, alterations would minimally affect the rest of the system, it is not clear how one can be sure of that, nor how an individual agent would be affected and how easily it could be evolved.
QB2       Project management- The evolution requested in scenario EB1.0 will of course demand a certain amount of time and money. How much does the methodology help in estimating such costs and, therefore, in evaluating whether the changes are feasible? Does the methodology help in establishing precise milestones? Can the software engineer establish a critical path in order to manage the project efficiently?
QB3        Database design- Having to change existing databases to comply with scenario EB1.1 might conflict with many of the non-functional aspects present in scenarios EA4.0 and EA4.1. How does the software engineer handle that? Which methodological constructs can help?
QB4        Addressing an even more complex system- Is the methodology able to scale up to a system with the complexity added by scenarios EB2.0 and EB2.1?
QB5        Database design- How would the software engineer handle the need for different schemas to represent the psychological profiles? Which methodological constructs might help with that?
QB6         Evolvability and product lines- If Abby decides to spend a year abroad, as in EB3.2, how would the methodology support changing the GA to comply with new requirements, such as variations among national standards?
QB7        Lightweight versions of the methodology for simpler problems- How does the methodology scale down to simpler problems like those stated in scenarios EB3.0, EB3.1 and EB3.2? Suppose the software engineer feels comfortable enough in the domain; can subsets or shortcuts of the methodology be used to avoid unnecessary work? Can one, for example, choose not to use the organisation diagrams from MESSAGE [Evans 01]?