Requirements Engineering Research at the University of Toronto

Early RE Seminars: Requirements-driven approaches to Software and Systems Engineering
Hosts
Prof. Eric Yu, Faculty of Information Studies
Prof. Steve Easterbrook, Dept. of Computer Science

Upcoming seminars
Abstract: RE research has produced many tools and techniques, but few have seen adoption---much less widespread adoption---in industry. One cause may be the lack of entry-level systems: when students first encounter RE, it seems like an awfully big hammer for their very small walnuts. This "seminar" will brainstorm meta-requirements and possible designs for a beginner's RE tool. (Note: I have to leave at 10:00 to teach; I really just want to get the discussion going among the grad students, in the hope that ideas will emerge in the coming weeks and months.)
Abstract: Changing requirements constitute one of the greatest risks for large software systems. The only way of keeping track of what the system should be doing is with reference to the customer requirements. However, this is rarely done for a variety of reasons. This talk proposes a formative design and analysis framework for modeling systems with an eye to post-implementation evolution and upgrade. The analysis toolkit is a modified version of i* and CWA, the Cognitive Work Analysis framework. Such a proposal ought to be evaluated for validity and usefulness. What is the nature of requirements change? Does the framework express the issues of interest? The latter portion of the talk focuses on a proposal to evaluate the toolkit longitudinally with respect to these questions.
Abstract: Context-aware applications monitor changes in their operating environment and switch their behaviour to keep satisfying their requirements. Therefore, they must be equipped with the capability to detect variations in their operating context and to switch behaviour in response to such variations. However, specifying monitoring and switching in such applications can be difficult due to their dependence on varying contextual properties which need to be made explicit. In this talk, I will present our work on specifying monitoring/switching requirements and highlight the results of our ongoing project. I will also discuss its connection to the monitoring/diagnosing framework.
Past Events
Tuesday October 30, 2007
9:30am - 11am, BA3234. Again we bring you multiple talks in this edition of ERE. The first two are practice talks for ER, the third for ASE. Note the new regular time for this term: we need to start promptly at 9:30 so we can end on time, as there is a colloquium at 11.
A Goal Oriented Approach for Modeling
Abstract: In designing software systems, security is typically only one design objective among many. It may compete with other objectives such as functionality, usability, and performance. Too often, security mechanisms such as firewalls, access control, or encryption are adopted without explicit recognition of competing design objectives and their origins in stakeholder interests. Recently, there has been increasing acknowledgement that security is ultimately about trade-offs. One can only aim for “good enough” security, given the competing demands from many parties. In this paper, we examine how conceptual modeling can provide explicit and systematic support for analyzing security trade-offs. After considering the desirable criteria for conceptual modeling methods, we examine several existing approaches for dealing with security trade-offs. From analyzing the limitations of existing methods, we propose an extension to the i* framework for security trade-off analysis, taking advantage of its multi-agent and goal orientation. The method was applied to several case studies used to exemplify existing approaches.
Abstract: The
service-oriented architecture (SOA) has been emerging as one of the
most popular system architectures in both the business and IT
communities because of its capability to achieve flexibility,
agility, and responsiveness to changing business needs. However, these
values can only be delivered if the business needs and strategic
concepts are properly analyzed and met by the technical solution. The
study of business models, stimulated by innovations in e-business, has
become an important step to support such analysis leading to technical
system design. This thesis examines the business modeling and analysis
needs arising from the business models literature, and considers the
potential of the i* modeling framework [Yu97] in addressing those
needs. A reference catalog approach is proposed to capture recurring
business models and provide design rationales for service-oriented
design. A sample reference catalog is provided. The effectiveness of
the proposed approach is evaluated using a real-world case study.
Abstract: Monitoring the satisfaction of software requirements and diagnosing what went wrong in case of failure is a hard problem that has received little attention in the Software and Requirements Engineering literature. To address this problem, we propose a framework adapted from artificial intelligence theories of action and diagnosis. Specifically, the framework monitors the satisfaction of software requirements and generates log data at a level of granularity that can be tuned adaptively at runtime depending on monitored feedback. When errors are found, the framework diagnoses the denial of the requirements and identifies problematic components. To support diagnostic reasoning, we transform the diagnostic problem into a propositional satisfiability (SAT) problem that can be solved by existing SAT solvers. We preprocess log data into a compact propositional encoding that better scales with problem size. The proposed theoretical framework has been implemented as a diagnosing component that will return sound and complete diagnoses accounting for observed aberrant system behaviors. Our solution is illustrated with two medium-sized publicly available case studies: a Web-based email client and an ATM simulation. Our experimental results demonstrate the feasibility of scaling our approach to medium-size software systems.
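The SAT-encoding idea in this abstract can be illustrated at toy scale. The sketch below is not the authors' actual framework: the component names and the single consistency rule are invented, and a brute-force enumeration stands in for a real SAT solver.

```python
from itertools import product

# Toy consistency-based diagnosis. Each component gets a propositional
# "health" variable; one clause per component says: if the component is
# healthy, its monitored requirement is observed satisfied. A diagnosis is
# a health assignment consistent with the log.

COMPONENTS = ["parser", "mailer", "ui"]  # hypothetical component names

def consistent(healthy, observations):
    # For booleans, (a <= b) is exactly (a implies b).
    return all(healthy[c] <= observations[c] for c in COMPONENTS)

def diagnoses(observations):
    # Enumerate all health assignments consistent with the observations.
    # A SAT solver would do this search; brute force suffices at toy scale.
    out = []
    for bits in product([True, False], repeat=len(COMPONENTS)):
        healthy = dict(zip(COMPONENTS, bits))
        if consistent(healthy, observations):
            out.append(healthy)
    return out

# Log data: the mailer's requirement was observed to be denied.
obs = {"parser": True, "mailer": False, "ui": True}
all_diags = diagnoses(obs)
# Prefer the diagnosis that blames the fewest components.
minimal = min(all_diags, key=lambda h: sum(not v for v in h.values()))
print(minimal)  # blames only the mailer
```

At realistic scale the abstract's point is precisely that the encoding must be compact enough for an off-the-shelf SAT solver to handle.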
Abstract: Small companies form a large part of the software industry, but have mostly been overlooked by the requirements engineering research community. We know very little about the techniques these companies use to elicit and track requirements and about their contexts of operations. This paper presents preliminary results from an ongoing exploratory case study of requirements management in seven small companies, which found that (a) successful small companies exhibit a huge diversity of requirements practices that work well enough for their contexts; (b) these companies display strong cultural cohesion; (c) the principal of the company tends to retain control of the requirements processes long after other tasks have been delegated; and (d) the evidence rejects the simplistic view of a current “software crisis”, as requirements errors for these companies, though problematic, are rarely catastrophic. We develop a number of hypotheses to explain these findings.
Abstract: We present details of a goal-oriented process for database requirements analysis. This process consists of a number of steps, spanning the spectrum from high-level stakeholder goal analysis to detailed conceptual schema design. The paper shows how goal modeling contributes to systematic scoping and analysis of the application domain, and subsequent formal specification of database requirements based on this domain analysis. Moreover, a goal-oriented design strategy is proposed to structure the transformation from the domain model to the conceptual schema, according to a set of user-defined design issues, also modeled as goals. The proposed process is illustrated step-by-step using a running example from the design of a real-world, industrial biological database.
Abstract: Requirements elicitation involves the construction of large sets of conceptual models. An important step in the analysis of these models is checking their consistency. Existing research largely focuses on checking consistency of individual models and of relationships between pairs of models. However, such a strategy does not guarantee global consistency. In this paper, we propose a consistency checking approach that addresses this problem for homogeneous models. Given a set of models and a set of relationships between them, our approach works by first constructing a merged model and then verifying this model against the consistency constraints of interest. By keeping proper traceability information, consistency diagnostics obtained over the merge are projected back to the original models and their relationships. The paper also presents a set of reusable expressions for defining consistency constraints in conceptual modelling. We demonstrate the use of the developed expressions in the specification of consistency rules for class and ER diagrams, and i* goal models.
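The merge-then-check strategy can be sketched in a few lines. Everything below is hypothetical: the mini-models, the single "one kind per element" constraint, and the traceability map are invented for illustration, not the paper's formalism.

```python
# Two tiny "models" mapping element names to their kind. In model A
# "Customer" is a class, in model B it is an actor.
model_a = {"Customer": "class", "Order": "class"}
model_b = {"Customer": "actor", "Invoice": "class"}

def merge(models):
    """Union the models, remembering which source each element came from."""
    merged, trace = {}, {}
    for name, elements in models.items():
        for elem, kind in elements.items():
            merged.setdefault(elem, set()).add(kind)
            trace.setdefault(elem, []).append(name)
    return merged, trace

def check(merged, trace):
    """Consistency constraint: an element must have a single kind.
    Each violation found on the merge is projected back to the
    originating models via the traceability map."""
    return [(elem, sorted(kinds), trace[elem])
            for elem, kinds in merged.items() if len(kinds) > 1]

merged, trace = merge({"A": model_a, "B": model_b})
for elem, kinds, origins in check(merged, trace):
    print(f"{elem} is {kinds} across models {origins}")
```

The point of the traceability map is the projection step: the diagnostic is computed once over the merge, but reported against the original models where a fix would actually be made.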
Abstract: In agile software processes, such as Extreme Programming (XP), units of work are expressed as 'User Stories.' These are casual descriptions of how a user would use the software system in a story format. They are less formal and contain less information than use cases and scenarios, but are a simple and surprisingly effective means for eliciting feature requests from customers, for tracking units of work, and for ensuring software quality. Recently, our research group has been investigating novel ways to leverage User Stories in software development. One project is STITCH, an Eclipse plug-in to support requirements traceability and program comprehension by linking User Stories with source code. A second project examines the role of storytelling and narrative structure in User Stories to identify where everyday folk knowledge about stories can be used to facilitate requirements elicitation.
From Susan's homepage: I'm a scientist. As such, I have two responsibilities. The first is the production of long-lived research results that deepen our understanding of software engineering as a technology that is used by people and as a technology that affects people. The second is training students to become great minds who contribute to society as engaged citizens and as innovative workers who are prepared for the future. It's a vocation that's second to none. My area of research is software engineering, in particular, tools that help software developers understand source code. I take an empirical approach, meaning that I study software developers to determine what technology is needed, how to build the technology, and whether we have built the technology correctly. Much of my research has been concerned with using benchmarking to validate research results and to advance research in scientific communities. Other areas of interest are research methodology, interoperability of reverse engineering tools, and software process for small business.
For more information see my pages on research and papers.
Security, privacy and governance are increasingly the focus of government regulations in the U.S., Europe and elsewhere. This trend has created a "regulation compliance" problem, whereby companies and developers are required to ensure that their software complies with relevant regulations, either through design or reengineering. In this seminar, we examine the challenges of developing tool support for extracting stakeholder requirements, called rights and obligations, from regulations. Specifically, we present the Cerno framework for automatically creating semantic annotations for documents. We then propose a tool for semi-automatic semantic annotation of concepts that constitute sources of requirements. These concepts include rights and obligations. Finally, we present preliminary studies on the evaluation of the quality of the resulting models, as well as the tools' effectiveness in supporting and complementing manual effort. This is joint work with Nicola Zeni, Luisa Mich and John Mylopoulos (University of Trento), Jim Cordy (Queen's University) and Travis Breaux and Annie Anton (University of North Carolina).
The
User Requirements Notation (URN) is an ITU-T standardization effort
that aims to produce a modelling language for early requirements
engineering activities. URN combines two complementary visual
languages: the Goal-oriented Requirement Language (GRL, based on i* and
the NFR framework) for business goals, non-functional requirements,
alternatives, decisions and rationales, and the Use Case Maps (UCMs)
visual scenario notation for causal flows of behavior superimposed on
architectural components. Various links between these two views are
also supported in URN. In this presentation, I will give an overview of
the notation and of common analysis techniques, namely GRL strategies
and UCM scenarios. I will then introduce more advanced application and
research areas related to URN, including: transformations to Message
Sequence Charts / UML sequence diagrams / test goals, architectural
evaluations, pattern formalization, business process modelling and
management, requirements management and policy compliance, performance
engineering, reverse engineering, and aspect-oriented requirements engineering. Several examples will be illustrated through the use of the jUCMNav open-source Eclipse plug-in (http://jucmnav.softwareengineering.ca/twiki/bin/view/ProjetSEG/WebHome). Biography: Daniel Amyot is Associate Professor at the University of Ottawa, which he joined in 2002 after working for Mitel Networks as a senior researcher in software engineering. His research interests include scenario-based software engineering, requirements engineering, business process modelling, aspect-oriented modeling, and feature interactions in emerging applications. Daniel is Rapporteur for requirements languages at the International Telecommunication Union, where he leads the development of the User Requirements Notation. He has a Ph.D. and an M.Sc. from the University of Ottawa (2001 and 1994), as well as a B.Sc. from Laval University (1992). He is also the father of three soccer-loving children.
The importance of UML models in software engineering is increasing. Inherent to the UML are its lack of a formal semantics, its susceptibility to inconsistency and completeness defects, and the absence of modeling norms. These properties are sources of poor model quality and defects. To find out to what extent defects occur, and what types of defects occur in practice, we empirically explore the state of the practice of quality in UML models using a series of industrial case studies.
Additionally, we analyzed the effects of defects in UML models in an experiment with 111 students and 48 practitioners. The results show that defects often remain undetected and cause misinterpretations. Furthermore, we will present techniques that we explored to manage UML quality problems: modeling conventions and task-oriented views. Bio – Christian Lange is currently finishing his PhD at the Technische Universiteit Eindhoven (The Netherlands). He received his M.Sc. in 2003 from the Department of Mathematics and Computer Science at the TU Eindhoven. His master's thesis was titled 'Empirical Investigations in Software Architecture Completeness'. His research interests include empirical software engineering, quality of Model Driven Design and UML in development and maintenance, program comprehension, software architecture and evolution. He is the initiator of the EmpAnADa project (Empirical Analysis of Architecture and Design Quality) at the TU Eindhoven. He is also the initiator of the MetricView tool. Christian Lange has published more than 15 papers in international journals, conferences, and workshops such as: IEEE Software, ICSE, MoDELS/UML, HICSS, QAOOSE. He serves on organizing committees for workshops such as: QAOOSE (Quantitative Approaches in OOSE), MSM (Model Size Metrics), and BENEVOL.
Requirements for a software project typically come from many
sources, usually called stakeholders. Agile methods propose capturing
requirements by having a single customer share the same physical space with the
developers. Requirements are written down in highly simplified form, and
whenever a question arises as implementation proceeds, it is immediately
answered by the customer. This seems to work well for particular kinds of
projects. Three obvious characteristics of such projects are that a single
representative customer can speak with authority on behalf of all stakeholders,
that development is a collaborative effort and not adversarial, and that the
size of the group is small enough that the customer can always be available to
any developer with a question – typically a group size of around ten or fewer.
This talk will discuss why agility in some form might be beneficial to military
software projects, what some of the difficulties are, and explore directions
that might lead to more agility. The talk will be illustrated with
examples. Bio: Terry Shepard was born in Winnipeg, in 1941. He received his B.Sc. and M.A. from Queen's University in Kingston in 1962 and 1965 respectively, and his Ph.D. from the University of Illinois in 1969, all in Mathematics. In 1989, he became a Professional Engineer. Since 1984 he has been with the Royal Military College in Kingston, Canada, where he started the software engineering programs in the Department of Electrical and Computer Engineering. Prior to that time, he was Executive Director of the Cable Telecommunications Research Institute in Ottawa (1982-84), Manager of Computers, Communications and Controls for Canada Square Corporation in Toronto (1980-82), and held various positions with the Department of Communications in Ottawa (1969-80), the last as Director of Research Policy. He is a member of the ACM, and is a Senior Member of the IEEE. His research interests are in software engineering. He has published more than 60 refereed papers. He is cross-appointed in the School of Computing at Queen's University in Kingston. |
Over the past years, software architecture researchers have been very active in developing new methods, techniques, and tools to support different activities of the software architecture process, such as design, documentation, and evaluation. However, the majority of them await rigorous empirical assessment. We believe that without systematically accumulating and widely disseminating evidence about the efficacy of different methods, techniques, and tools, it would be naïve to expect successful technology transfer. Anecdotal evidence alone, irrespective of the credibility of the source, may not be enough to convince organizations to include a technique in their portfolio and provide training to their employees to use that technique. That is why there is a vital need for gathering and disseminating evidence to help researchers assess current research and identify promising areas of research, and to help practitioners make informed decisions when selecting a suitable method or technique for supporting the software architecture process. One of our main research goals is to improve the software architecture process by developing and/or empirically assessing various methods, techniques, and tools. To this end, we have been conducting a series of empirical studies using different research approaches and data generation methods, following the principles of the evidence-based paradigm. In this talk, I will be discussing some of our studies to demonstrate how we have applied several research approaches and a wide variety of data generation methods to gather evidence. I will also discuss how we have used the evidence to assess the outcomes of our research and to guide our ongoing research in the software architecture discipline. Bio – Ali Babar is a Senior Research Fellow with Lero, the Irish Software Engineering Research Centre, where he is leading a project on empirical studies in software product lines. Until recently, he was a researcher with National ICT Australia.
He has also been designing and delivering software engineering courses as an adjunct academic since 2000. Previously, he also worked as a software engineer and an IT consultant for several years in Australia. His current research interests include software product lines, evidence-based software engineering, software architecture design and evaluation, architectural knowledge management, and process improvement. He received a Ph.D. in software engineering from the University of New South Wales, Australia.
The i* framework (Yu, 1995) offers a graphical notation for modeling and analyzing socio-technical systems, particularly during early stage requirements engineering for software systems. A number of software tools exist to support i* modeling. Nevertheless, there is a strong need for better visual support in i* modeling. This study aimed to identify areas for improvement and to develop prototypes to address these areas. A review of existing i* tools and observation of users revealed that current tools provide inadequate support for maintaining an overall view of a model, filtering extraneous data and extracting sub-collections of the model data. During observation, modellers constantly rearranged elements to achieve clarity when analyzing relationships between actors and struggled to gain a clear overall view of the model. Prototypes were then developed based on these needs, and an evaluation by users of i* was carried out to test the validity of the concepts and requirements included in the prototypes. User feedback suggested these prototypes would be useful when using software for i* modelling and offered suggestions for further improvements.
Software
technology now penetrates almost every aspect of our lives in complex
ways. The reality of 21st century software development is that software
itself is but one part of a complex system-of-systems that includes a
broad technological infrastructure along with a wide set of human
activities. The technological systems and the human activity systems
have a symbiotic relationship - each shapes the other in complex
ways, such that neither can be understood in isolation. A recent report
from the SEI on Ultra-Large Scale (ULS) Systems accurately
characterized the nature of these systems-of-systems: they have no
centralized control; experience normal failures and continual evolution
of heterogeneous elements; and their requirements are inherently
conflicting, diverse and often unknowable. For design purposes, the
boundary between people and software disappears - design is as much
about shaping the human activities as it is about constructing the
software. In this talk, I will argue that these challenges are now true of most software development. The engineering approaches we use today for software development only work when we take a very narrow view of the requirements, as well-defined sets of features and interfaces, which can be fully specified. This approach helps us to build components that conform (in a narrow sense) to their specifications. But we cannot tell in advance whether they will be any use in any of the many different systems-of-systems in which they may be deployed. Our engineering techniques rapidly break down when we attempt to scale up our design ambitions. The result is a growing gap between expectations and practice in the software industry. We can build very reliable software at the small scale, for tightly constrained problems. But we cannot build reliable software for complex socio-technical domains. To make progress on these challenges we need to abandon the idea that we can write complete, consistent specifications. Instead, we need to capture the multiple, conflicting requirements for each software component that arise from its different contexts of use. We need to be able to express our partial understandings of the broader systems-of-systems in which our components will be deployed. And we need to be able to reason about the properties, and end-to-end behaviours of these systems, without resolving all the unknowns and inconsistencies in our models. I will end the talk with a survey of recent research in requirements engineering that tackles these challenges. In particular I will discuss techniques for managing large, evolving collections of fragmentary requirements models, and show how it is possible to tolerate inconsistency when we reason about the properties of these models. |
This
is a keynote talk that I gave to the Doctoral Symposium at the
Foundations of Software Engineering (FSE-14) conference in November 2006. The
talk addresses the question of how we should evaluate the claims we
make for our research in software engineering. The central idea I
present in the talk is that the only way to judge a "contribution to
knowledge" is to relate the research to new or existing theories, and
to collect evidence that helps to support or refute these theories.
However, in software engineering, we very rarely articulate our
theories. The talk explores the nature of theories, and surveys a range
of empirical methods that can be used to investigate and/or develop
theories in software engineering. The slides are on the web at: http://www.cs.uoregon.edu/fse-14/docsym_docs/FSE06DocSymp-keynote-v5.pdf
Terminological interference occurs in requirements engineering when stakeholders vary in the concepts they use to understand a problem domain, and the terms they use to describe those concepts. This paper investigates the use of Kelly's Repertory Grid Technique (RGT) to explore stakeholders' varying interpretations of the labels attached to softgoals in a goal model. We associate softgoals with stakeholders' personal constructs, and use the tasks that contribute to these goals as elements that stakeholders can rate using their constructs. By structurally exchanging grid data among stakeholders, we can compare their conceptual and terminological structures, and gain insights into relationships between problem domain concepts.
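One simple way to surface such interference from exchanged grids is to correlate two stakeholders' ratings of the same elements under a shared construct label. The sketch below uses invented tasks and ratings, and Spearman rank correlation is just one plausible comparison measure, not necessarily the one used in the paper.

```python
# Toy repertory-grid comparison. Two stakeholders rate the same tasks
# (elements) on a construct they both label "usability"; a strongly
# negative rank correlation hints that the shared label hides different
# underlying concepts.

def spearman(xs, ys):
    """Spearman rank correlation for distinct ratings (no tie handling)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

tasks = ["login", "search", "export", "report", "admin"]  # hypothetical
alice = [5, 4, 3, 2, 1]   # ratings on "usability", scale 1..5
bob   = [1, 2, 3, 4, 5]
rho = spearman(alice, bob)
print(rho)  # -1.0: same construct label, opposite interpretations
```

A real grid exchange would compare many constructs pairwise and flag the label pairs whose rating patterns diverge most.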
Pervasive computing environments need to provide autonomic services by capturing and adapting to the task requirements and operational context of mobile users. Although some frameworks have been proposed for managing user-centric service adaptation in pervasive computing, there has been little work on examining to what extent user goals are satisfied. Goal modeling has been viewed as an effective way of revealing, clarifying and refining requirements, and has been applied widely in early-stage requirements elicitation. Some work has also been done in the area of reverse engineering to re-capture goal models from source code or system interfaces, to elicit the alternatives and goals provided by existing systems. This paper reports an empirical study of the usefulness of goal-oriented requirements engineering (GORE) in designing pervasive applications with user-centric adaptation. We collected user expectations of a smart meeting room through individual interviews and built user goal models by summarizing our findings. We also captured the system goal models for two smart meeting room projects. The usefulness of GORE is evaluated by comparing the system goal models with the user goal model, where system goals denote the qualities or functions achieved by existing software products and user goals denote the users’ expectations for a new system. The results show that differences exist between user expectations and system achievements. Our study also suggests that with early-phase user goal identification, user intents and alternatives can be identified more completely and accurately, which may help to improve adaptation and user satisfaction for pervasive systems.
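At its simplest, the comparison step described above reduces to set operations between goal labels. The goal names below are invented, and a real comparison would also need to match differently-worded goals rather than exact strings.

```python
# Hypothetical goal labels for a smart meeting room.
user_goals = {"adjust lighting", "book room", "share slides", "record meeting"}
system_goals = {"book room", "share slides", "control projector"}

# User expectations the existing systems do not achieve.
unmet = sorted(user_goals - system_goals)
# System capabilities no interviewed user asked for.
extra = sorted(system_goals - user_goals)
# Fraction of user expectations covered by the systems.
coverage = len(user_goals & system_goals) / len(user_goals)

print(unmet)
print(extra)
print(coverage)
```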
Facilitating
the transfer of knowledge between knowledge workers represents one of
the main challenges of knowledge management. Knowledge transfer
instruments, such as the experience factory concept, represent means
for facilitating knowledge transfer in organizations. As past research
has shown, effectiveness of knowledge transfer instruments strongly
depends on their situational context, on the stakeholders involved in
knowledge transfer, and on their acceptance, motivation and goals. In
this paper, we introduce an agent-oriented modeling approach for
analyzing the effectiveness of knowledge transfer instruments in the
light of (potentially conflicting) stakeholders’ goals. We apply this
intentional approach to the experience factory concept and analyze
under which conditions it can fail, and how adaptations to the
Experience Factory can be explored in a structured way.
Jennifer Horkoff, Department of Computer Science, University of Toronto
Eric Yu, Faculty of Information Studies, University of Toronto
Lin Liu, School of Software, Tsinghua University, Beijing
As
technology design becomes increasingly motivated by business strategy,
technology users become wary of vendor intentions. Conversely,
technology producers must determine what strategies they can employ to
gain the trust of consumers in order to acquire and retain their
business. As a result, both parties have a need to understand how
business strategies shape technology design, and how such designs alter
relationships among stakeholders. In this work, we use the Trusted
Computing domain as an example. Can the technology consumer trust the
advertised intentions of Trusted Computing Technology? Can the
providers of Trusted Computing gain the trust of consumers? We use the
i* Modeling Framework to analyze the links between strategies and
technologies in terms of a network of social intentional relationships. By applying the qualitative i* evaluation procedure, we probe the intentions behind the strategies of technology providers, facilitating an analysis of trust.
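The flavour of qualitative evaluation mentioned above can be sketched as forward label propagation over contribution links, in the i*/NFR style. This is a drastic simplification: the actual procedure also handles partial source labels, the combination of multiple incoming links, and human judgement. The goal names, links, and propagation table below are hypothetical.

```python
# Simplified qualitative propagation: (source label, link type) -> result.
LINK_EFFECT = {
    ("satisfied", "make"):  "satisfied",
    ("satisfied", "help"):  "partially satisfied",
    ("satisfied", "hurt"):  "partially denied",
    ("satisfied", "break"): "denied",
    ("denied", "make"):     "denied",
    ("denied", "help"):     "partially denied",
    ("denied", "hurt"):     "partially satisfied",
    ("denied", "break"):    "satisfied",
}

# Hypothetical contribution links: (source goal, link type, target goal).
links = [
    ("Use DRM", "help", "Protect Content"),
    ("Use DRM", "hurt", "Consumer Trust"),
]

# Initial label assignment represents one strategic alternative to probe.
labels = {"Use DRM": "satisfied"}
for src, kind, dst in links:
    if src in labels:
        labels[dst] = LINK_EFFECT[(labels[src], kind)]

print(labels)  # trade-off made visible: content protected, trust weakened
```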
An
enterprise architecture is intended to be a comprehensive blueprint
describing the key components and relationships for an enterprise from
strategies to business processes to information systems and technologies. Enterprise architectures have become essential for managing change in complex organizations. While “motivation” has been recognized since Zachman [1] as an important element of enterprise architecture, to date most enterprise architecture modeling deals only with structure, function, and behaviour, neglecting the intentional dimension of motivations, rationales, and goals. The contribution at hand explores this challenge and aims to illustrate the potential of intentional modeling in the context of enterprise architecture. After introducing two intentional modeling languages and their relation to the enterprise architecture construction process, we report on an explorative case study that aimed to investigate the practical implications of intentional modeling and analysis for enterprise architectures. Finally, we present key observations from interviews that were conducted with practitioners to obtain feedback regarding the material developed in the case study.
Hierarchical
modeling is a crucial means for organizing and understanding large
models of requirements and software architecture. ADORA is a language and tool for integrated hierarchical object modeling that has been developed in my research group at the University of Zurich. The main features of ADORA that distinguish it from other approaches are the use of abstract objects instead of classes as the basis of the model, a systematic hierarchical decomposition of the model and the integration of all aspects (structure, behavior, interaction,...) in one coherent model. While in UML, hierarchical structure has been added later to a conceptually flat set of languages, ADORA has been designed around the principles of hierarchy and integration of modeling aspects. Hierarchical models are also a challenge for tool builders – at least if one wants to go beyond simple scrolling and explosive zooming. In ADORA, we have developed a novel visualization concept for object hierarchies. It is based on logical fisheye views and enables the visualization of global context and local detail in the same view. In order to make this concept work, novel zooming and line routing concepts have been developed. In my talk, I will first give an overview of the constituent features of the ADORA language. I will then discuss the elements of our visualization concept, in particular context-preserving zooming and smart line routing. The ADORA tool prototype will be demonstrated. Biography: Martin Glinz is a full professor of Computer Science and head of the Requirements Engineering Research Group at the University of Zurich. His primary research interests are methods and tools for requirements modeling. He has also worked in requirements-based testing. In teaching, he covers the field of software engineering, including quality management. He received a diploma degree in Mathematics and a Dr. rer. nat. in Computer Science from Aachen Technical University, Germany. 
Prior to joining the University of Zurich in 1993, he was with BBC/ABB, where he was active in software engineering research, development, training, and consulting for ten years. Martin Glinz is on the editorial board of the Requirements Engineering Journal and the Software and Systems Modeling Journal. He has been Program Chair of this year’s IEEE International Requirements Engineering Conference. He is a member of the executive board of the Special Interest Group on Software Engineering of the Swiss Informatics Society. He is deputy chair of the Department of Informatics at the University of Zurich. From 2000 to 2006, he was Vice Dean of the Academic Program in Informatics. In this period, he was the lead architect of two major revisions of the curriculum. |
Recently, there has been a growing interest in the agent-oriented paradigm to cope with the needs imposed by today's complex, networked systems. Developing Multi-Agent Systems (MAS) calls for addressing aspects such as interaction, autonomy, collaboration and pro-activeness. One way to cope with these needs is to place agency properties as well as intentionality at the center of the software development process. In this work, a proposal is presented to bring intentionality and agency properties to the early stages of software development. The proposal is based on Strategic Dependency Situations (SDsituations) as a simple technique for supporting requirements elicitation. An SDsituation is a set of interdependent strategic dependencies. We propose to use SDsituations to facilitate elicitation during early-stage RE, together with LEL and scenarios, leading up to i* models. SDsituations offer benefits in validation because they can be shown using one readable representation and can be customized by applying more than one viewpoint (the depender or dependee viewpoint). Moreover, SDsituations can help requirements management by maintaining traceability and a baseline for recording requirements evolution. |
We present a variability-intensive approach to goal decomposition that is tailored to support requirements identification for highly customizable software. The approach is based on the semantic characterization of OR-decompositions of goals. We first show that each high-level goal can be associated with a set of concerns, in response to which alternative refinements of the goal can be introduced. A text corpus relevant to the domain of discourse can be used to derive such variability concerns specific to the problem. In parallel, contextual facts that can vary while a goal is being fulfilled are modeled. Then, a high-variability goal model is constructed that aims to respond completely to the predefined variability concerns, while contextual factors are used to test whether it addresses all realistic background circumstances. The approach was applied in a study from the geriatric health care domain. |
In organizations, knowledge is increasingly regarded as an intangible asset that needs to be actively managed and developed. In the research domain of knowledge management, modeling knowledge work represents a critical foundation for understanding problem domains, identifying knowledge-based improvements and designing supportive systems. However, questions concerning the focus, scope and complexity of modeling efforts in these contexts make modeling initiatives a pressing practical problem and a challenging research problem. This presentation introduces aspects of the B-KIDE Framework and Tool for business-process-oriented development of knowledge infrastructures. Based on a meta-model, the framework and the accompanying tool aid the interview-based identification and subsequent visualization of existing knowledge processes that run within and across business processes. By reengineering these knowledge processes, the framework enables the alignment of supportive knowledge management initiatives/instruments with the most value-generating activities of organizations. Markus Strohmaier is a post-doctoral research fellow at the Department of Computer Science at the University of Toronto. He is also affiliated with the Know-Center Graz, Austria's competence center for knowledge-based systems and applications. He holds a PhD in computer science from the University of Technology, Graz (Austria). His interests cover enterprise information systems, knowledge technologies and conceptual modeling. |
Developing Multi-Agent Systems (MAS) calls for addressing different concerns. Some of them are general and related to the technology, and others are particular to each collaborating agent. Our proposal aims to provide a more holistic approach to the construction of MAS. By integrating three different perspectives for modeling information, we achieve a more comprehensible way of dealing, early on, with the different concerns that are particular to MAS. In this paper we report our initial findings in integrating these three perspectives. We have used a known exemplar, the Expert Committee, to illustrate the advantage of dealing with these three different perspectives. Our contribution lies in tackling different concerns in an integrated manner during the requirements definition of a MAS development. |
Motoshi Saeki wrote: Ms Itakura will also talk about highlights from her own research – details below. 1. A study on finding technical influencers using a text mining tool (Chance Discovery tool), 2001-2003 |
Inter-organizational networks, which comprise human actors and information and communication systems, are often described by the interplay of various goals and activities as well as their strategic interdependencies. The goal of this project is to expand on the idea of static requirements analysis for such networks by developing an online collaborative decision-support tool for trust-based cooperation networks. To this end we are making use of a slight extension of i* and a mapping from i* into executable programs in the action language ConGolog. In this talk I will give an overview of the current state of the project. Biography: Gerhard Lakemeyer is Associate Professor of Computer Science and head of the knowledge-based systems group at the Aachen University of Technology, Aachen, Germany. He is also affiliated with the information systems group. His research interests are in Artificial Intelligence, Knowledge Representation, and Cognitive Robotics. His publications include The Logic of Knowledge Bases (MIT Press, 2001), with Hector Levesque, and Exploring AI in the New Millennium (Morgan Kaufmann Publishers, 2002), co-edited with Bernhard Nebel. |
Autonomic computing systems reduce software maintenance costs and management complexity by taking on the responsibility for their own configuration, optimization, healing, and protection. These tasks are accomplished by switching at runtime to a different system behaviour (one that is more efficient, more secure, more stable, etc.) while still fulfilling the main purpose of the system. Thus, identifying and analyzing alternative ways in which the main objectives of the system can be achieved, and designing a system that supports all of these alternative behaviours, is a promising way to develop autonomic systems. This paper proposes the use of requirements goal models as a foundation for such a software development process and sketches a possible architecture for autonomic systems built using this approach. |
Highlights from and discussions about what happened at these conferences in early September. |
The use of
Viewpoints has long been proposed as
a technique to structure evolving requirements models. In theory,
viewpoints should provide better stakeholder traceability, and the
ability to discover important requirements by comparing viewpoints.
However, this theory has never been tested empirically. This paper
reports on an exploratory case study of a key hypothesis of the
viewpoints theory, namely that by creating separate viewpoint models to
represent different stakeholder contributions, and explicitly merging
them, important hidden requirements can be discovered. The case study
compared two modelling teams using the i* notation to capture
requirements for new web-based counselling services for a large
charitable organisation. One team used viewpoints; the other did not.
The conclusions include that viewpoint merging does reveal important
differences in concepts described by different stakeholders, but is
extremely time consuming. The study also pointed to the need for better
model management tools, as both teams encountered difficulty in
managing large, evolving models. |
View merging,
also
called view integration, is a key problem in conceptual modeling. Large
models are often constructed and accessed by manipulating individual
views, but it is important to be able to consolidate a set of views to
gain a unified perspective, to understand interactions between views,
or to perform various types of end-to-end analysis. View merging is
complicated by inconsistency of views. Once views are merged, it is
useful to be able to trace the elements of the merged view back to
their sources. In this paper, we propose a framework for merging
incomplete and inconsistent graph-based views. We introduce a
formalism, called annotated graphs, which incorporates a systematic
annotation scheme capable of modeling incompleteness and inconsistency
as well as providing a built-in mechanism for stakeholder traceability.
We show how structure-preserving maps can capture the relationships
between disparate views modeled as annotated graphs, and provide a
general algorithm for merging views with arbitrary interconnections. We
use the i* modeling language as an example to demonstrate how our
approach can be applied to existing graph-based modeling languages. |
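As a rough illustration of the traceability idea (not the paper's annotated-graph formalism itself), one can record, for every merged element, the set of source views it came from; this is the kind of built-in stakeholder traceability the annotation scheme provides. The `merge_views` function and the edge representation below are hypothetical simplifications:

```python
def merge_views(views):
    """Merge several graph views into one graph, annotating each edge
    with the names of the views that contributed it (traceability)."""
    merged = {}
    for view_name, edges in views.items():
        for edge in edges:  # an edge is a (source, target) pair
            merged.setdefault(edge, set()).add(view_name)
    return merged
```

In such a scheme, elements annotated with only one source view are natural candidates for inspecting potential disagreements between stakeholders.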
Knowledge engineering is
a
complex task. Large knowledge models consist of tens or hundreds
of thousands of concepts, interconnected in many ways.
Furthermore, every stakeholder in the modeling process -
engineer, modeler, doctor, patient, manager - has a different
Weltanschauung or world-view with which they approach the model.
A final difficulty is that many tools are research-quality, not
production-quality, and oriented to the knowledge engineer rather than modelers or users. In this talk I present a framework for understanding how to create advanced visual interfaces for such tools, based on contextual inquiries conducted at two organizations, as well as literature analysis and surveys. I used this framework to suggest improvements to a particular tool that attempted to provide cognitive support for modelers using Protege (http://protege.stanford.edu). I demonstrate this tool, Jambalaya (http://www.thechiselgroup.org/jambalaya), and conclude with some remarks concerning tools of interest to the RE group here at U of T. These results are from research I conducted at the University of Victoria as a member of the CHISEL group (http://thechiselgroup.org). http://neilernst.net/docs/pubs/neil-thesis-final.pdf |
Active
research is being done in how to go from requirements to architecture.
However, no studies have been attempted in this area despite a long
history of empirical research in software engineering (SE). Our goal is
to establish a framework for the transformation from requirements to
architecture on the basis of a series of empirical studies. The first
step is to collect evidence about practice in industry before designing
relevant techniques, methods and tools. As part of this step, we use an
interview-based multiple-case study with a carefully designed process
of conducting the interviews and of preparing the data collected for
analysis while preserving its integrity. In this paper, we describe the
design of this multiple-case study, delineate the evidence trail,
discuss validity issues, outline the data analysis focus, discuss meta-issues in evidence-based SE, particularly on combining and using
evidence, describe triangulation approaches, and present two methods
for accumulating evidence. http://www.cs.toronto.edu/~wl/papers/2005/rebse2005liu.pdf
|
Information Retrieval (IR) techniques provide an efficient approach to generate potential links between documents and other types of media. Requirements tracing can be framed as an information retrieval problem. We implement the requirements tracing process using the vector space IR model, and perform a case study by applying our tool to NASA’s User Spacecraft Clock Calibration System (USCCS). The results show that most potential links between the requirements and the design specifications are generated efficiently. However, the limitations in recall and precision are also obvious. Based on the analysis of this case study, we identify some factors that significantly affect the results of IR-based requirements tracing, and propose a number of ways for boosting and balancing recall and precision without loss of efficiency. |
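The vector space model behind such tracing tools can be sketched in a few lines: requirements and design elements become TF-IDF vectors, and cosine similarity above a threshold yields candidate links. This is a generic sketch of the technique, not the actual tool applied to USCCS, and the tokenized documents are invented:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a TF-IDF vector (term -> weight) for each tokenized document."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse TF-IDF vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Ranking design elements by their cosine similarity to a requirement, and keeping those above a cutoff, gives the candidate trace links whose recall and precision are then evaluated.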
Interviews with stakeholders can be a useful method for identifying user needs and establishing requirements. However, interviews are also well known to be problematic. They are time consuming and may result in insufficient, irrelevant or invalid data. Our goal is to reexamine the methodology of interview design and propose interview techniques specific to the field of requirements engineering that support the elicitation of productive data. We examined a Web conferencing system used by a support group for spousal caregivers of people with dementia. We look at two sets of interviews that were created via two different approaches and compare the participants' responses to each format. A thorough analysis of the interview context and its relationship with interview design gives insight into the techniques that may be optimal under certain conditions. In our investigations we conduct a risk analysis of the context. As a result of what we learned, we propose a framework to help analysts design interviews and choose tactics based on the context of the elicitation process. We call this the contextual risk analysis framework. |
A reverse engineering process aims at
reconstructing high-level abstractions from source code. This
paper presents a novel reverse engineering methodology for recovering
requirements goal models from
both structured and unstructured legacy code. The methodology
consists of the following major steps: 1) Refactor the source code by extracting methods based on comments; 2) Convert the refactored code into an abstract structured program through statechart refactoring and hammock graph construction; 3) Extract a goal model from the structured program's abstract syntax tree; 4) Identify non-functional requirements and derive softgoals based on the traceability between the code and the goal model. To illustrate this requirements recovery process, we recover requirements goal models from two legacy software code bases: an unstructured Web-based email system in PHP (SquirrelMail) and a structured email client system in Java (Columba). |
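Step 1 of the process can be pictured as follows: each leading comment becomes a candidate label for a method to extract, and those labels later seed goal names in the recovered model. This is a deliberately naive sketch; the regex and function name are assumptions of mine, not the authors' implementation:

```python
import re

# Match a line comment (PHP/Java style) and capture its text.
LINE_COMMENT = re.compile(r"//\s*(.+)")

def candidate_goal_labels(source):
    """Collect comment texts as candidate names for methods to extract,
    which later become candidate goal names in the recovered model."""
    return [m.group(1).strip() for m in LINE_COMMENT.finditer(source)]
```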
We will have a discussion and collect ideas
about tools to support i* modelling. A major issue will be
various forms of interoperability among the various tools that have
sprung up from different research groups. If you have used OME,
OpenOME, or Visio or other drawing tools to do i* or NFR modeling,
please come and contribute ideas - wishlists, as well as suggestions
for software architectural design, UI, visualization design,
interchange file formats, etc. We will also give a summary of the discussions at the i* workshop held last week in London. The presentation files will soon be available in a password-protected directory. |
Software requirements consist of a list of desirable functions to be accommodated by a proposed software system. Through goal-oriented requirements engineering, stakeholder goals are analyzed into goal models that concisely define a space of alternative sets of functional requirements. We adopt this framework and propose a systematic generation of generic software designs that can accommodate ALL alternatives for the fulfillment of these stakeholder goals. In this paper, we enrich goal models with lightweight annotations to generate three high-variability software design views: feature models, statecharts, and component-connector models. Our process has been applied to an extended study of the meeting scheduling problem to derive an initial high-variability design for the system-to-be. |
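A minimal sketch of the kind of reasoning such goal models enable: propagating satisfaction bottom-up through AND/OR decompositions to check which sets of leaf-level tasks fulfill the root goal. The encoding (dicts of decompositions and leaf truth values, and the meeting-scheduling goal names) is an assumption of mine, not the paper's notation:

```python
def satisfied(goal, decompositions, leaves):
    """Return whether `goal` is satisfied, given AND/OR decompositions
    and truth values for the leaf-level tasks."""
    if goal in leaves:
        return leaves[goal]
    op, children = decompositions[goal]
    values = [satisfied(c, decompositions, leaves) for c in children]
    return all(values) if op == "AND" else any(values)
```

Each assignment of truth values to the leaves that satisfies the root corresponds to one alternative set of functional requirements in the variability space.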
We investigate the personalization capabilities of common personal software systems. We use a typical e-mail client as an example of such a system, and examine the configuration screens it offers to its users. We discover that each configuration value reflects one of the ways in which user goals can be satisfied. Thus, we construct a goal model in which alternative ways of satisfying high-level goals are matched with alternative system configurations. This way, automatic configuration of the system can be achieved by reasoning about the overlaid goal model. We find that the vast majority of the configuration options that refer to system functionality can be configured using this method, thereby facilitating personalization for users with no technical background and ensuring, at the same time, consistency and meaningfulness in the configuration result. |
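The matching between goal alternatives and configuration values can be pictured as a simple table from OR-alternatives to option settings. The goal names and option keys below are invented for illustration and are not taken from any particular mail client:

```python
# Hypothetical mapping: each OR-alternative of a goal fixes some options.
GOAL_ALTERNATIVES = {
    "notify user of new mail": {
        "play sound": {"mail.notify.sound": True, "mail.notify.popup": False},
        "show popup": {"mail.notify.sound": False, "mail.notify.popup": True},
    },
}

def configure(selections):
    """Derive a flat configuration from the chosen alternative per goal."""
    config = {}
    for goal, choice in selections.items():
        config.update(GOAL_ALTERNATIVES[goal][choice])
    return config
```

A user then states preferences at the level of goals ("show popup") rather than hunting through configuration screens for the corresponding low-level options.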
OpenOME is a goal/agent/aspect-oriented requirements engineering tool. In this talk, we explain the current development of OpenOME, resulting from reengineering the legacy OME tool. We explain the unique features of OpenOME and the improvements in its usability, extensibility, interoperability, etc. Currently OpenOME supports advanced research topics such as qualitative/quantitative goal analysis, ontology-based queries, and requirements knowledge reuse. In the near future, it will support Web-service-based editing, weaving of requirements goal aspects, discovery and application of requirements patterns, viewpoint extraction and applications, etc. OpenOME is 100% open-source and any contributions are welcome. At the end, of course, we show how you can get involved and contribute. References: http://www.cs.toronto.edu/~yijun/OpenOME.html http://sf.net/projects/openome |
Goal- and agent-oriented modelling techniques represent one of the major accomplishments of the Requirements Engineering research community. As modelling frameworks cross the boundaries between research-phase prototypes and application in complex real-life projects, scalability challenges surface that could not easily be anticipated through simple example applications. Models, like software products, have their own lifecycle. They go through an initial development phase, are then validated by stakeholders and finally used for reasoning purposes by the analysis team. I argue that each phase entails its own scalability challenges and requires specific mechanisms to address them. In this talk I propose an iterative, "one concern at a time" analysis approach and present a set of model decomposition concepts, based on the i* model topology and the analysis question at hand, to address complex model reasoning challenges. Additionally, I will offer our insights and practical experience in dealing with complex model development and validation difficulties. |
Software estimation research is normally concerned with designing models and techniques that help estimators reach accurate effort calculations. However, since software estimation involves human judgment, we should also consider cognitive and social factors that have been observed in psychological studies of judgment. A potentially relevant factor is the cognitive bias of anchoring and adjustment. It takes place when, in attempting to respond to a complex question, the respondent is given an initial, possible (though quite likely incorrect) answer. The respondent seems to adjust it internally to reach a more plausible answer, but the adjustment is frequently insufficient. In this talk I will present the results of an experiment on anchoring and adjustment in software estimation. The results show that anchoring and adjustment changes the outcome of software estimation processes. They also suggest that software estimators tend to have too much confidence in their own estimates. |
Kids Help Phone is a not-for-profit organization dedicated to providing free, anonymous and confidential counseling to kids in need, through a phone hotline and the internet. The nature of the organization and the number and type of stakeholders involved are ideal for performing a strategic requirements analysis. Our team carried out such an analysis by modelling the domain knowledge with i* using two parallel approaches: a viewpoints method and a global-model approach. The resulting models, in terms of scale, are probably the greatest test of i* performance so far. Although this project is still a work in progress, we have already found valuable insights, such as differences in the results of the viewpoints and global approaches, mechanisms to extract relevant findings from very large models, and ways to detect and negotiate conflicting information while merging viewpoints. Present challenges our team faces include finding ways to reduce the effort needed to obtain relevant results from these large models, as well as exploring how to validate and present these models to the organization. |
In today's ever-changing socio-economic environment, organization and the embedded information system need to evolve as an organic whole on a continuous basis to adapt to new business requirements. In order to guide the coevolution of organization and information system, this paper introduces Tropos Evolution Modeling Process for Organizations (TEMPO). The conceptual framework of this model is grounded on analogies between information system, socio-economic system, and living system; agent-orientation is applied as an overarching paradigm that aligns the three domains. In particular, by interpreting Kauffman's NKC model, which was intended to simulate the coevolution of species in an ecosystem, with Tropos ontology, we introduce the concept of goal interface as the evolution frontier of an organization. Within this interface, evolution is viewed as a process of negotiation between agents on goals both within and beyond the original organizational boundary. The organization is re-stabilized when agreements are reached on the relations between goals. In order to assist in the identification and resolution of goal interactions, a goal relation taxonomy and corresponding negotiation strategies are presented. TEMPO is illustrated with a real-life case study, which demonstrates how to evolve an online retail website under the new European e-commerce legislation. |
During early-phase requirements engineering, it is important to understand the strategic concerns. However, a model without sufficient knowledge representation may be insensitive to differences in organization and system settings. There thus exists a dilemma with respect to strategic modeling and adequate knowledge representation: presenting all the data needed for adequate knowledge representation can hinder strategic modeling and analysis. A metamodel for security offers one solution to this dilemma. The metamodel facilitates the modeling of security at the action level, which adequately represents the security knowledge. The i* strategic modeling can be achieved by translating the security knowledge base into intentional-level elements based on a set of axioms. Consequently, strategic modeling and analysis can be ensured while preventing unnecessary details from hindering the modeling and analysis. In fact, strategic modeling can also be greatly facilitated by the security metamodel, because adequate knowledge representation ensures a proper adaptation of the strategic model to different organization and system settings. Consequently, better and more accurate strategic models can be developed based on the security metamodel. The metamodel is sufficiently flexible to allow different levels of granularity: the requirements engineer can use it to represent high-level concepts and perform coarse-grained modeling and analysis, or to represent low-level concepts for finer modeling and analysis. The security metamodel is developed based on the System-Theoretic MLS-Based Inferential (STMI) model, an access control model for socio-technical systems based on system theory. The rationales for developing the STMI model are discussed. In addition, the issues pertaining to integrating security into system development are also discussed. |
This thesis proposes an extension to the i* framework to address scalability issues. The notion of "view" is exploited to selectively present portions of an i* "baseline model", which contains all modeled elements for a given application using i* notations. We first reformulate the i* framework and define four types of views: Actor Class, Strategic Dependency, Strategic Rationale, and Evaluation Results. Next, we define sub-view types based on the four types of views and supply a view management framework. The views and sub-views are defined using meta-models, and formalized using the Telos conceptual modeling language. Each view type is associated with a formally defined "selection rule" so that the projection of a specific view from a baseline model can be automated. Relationships among views are depicted in View Maps. Illustrative examples are taken from the London Ambulance Service and the Trusted Computing Group case studies. |
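The idea of a formally defined selection rule can be sketched as a predicate over model elements: a view is simply the subset of the baseline model that satisfies its rule. The element representation and the sample rule below are hypothetical simplifications of the Telos formalization:

```python
def project_view(baseline, selection_rule):
    """Project a view from the baseline model by keeping exactly the
    elements that satisfy the view's selection rule."""
    return [elem for elem in baseline if selection_rule(elem)]

# A Strategic Dependency view, for instance, might keep only actors
# and the dependency links between them.
sd_rule = lambda e: e["kind"] in {"actor", "dependency"}
```

Because the rule is a function of the element alone, projection of any view from the baseline can be fully automated, as the thesis requires.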
Producing software architectural design still poses many challenges in software engineering. I hypothesize that requirement structuring and refactoring provide means to identify subsystem decompositions as part of the architectural design. This paper outlines my doctoral research in validating the hypothesis and building methods to achieve it. Current progress and evaluation methods are described. |
In constructing a web service system based on a business idea, many implementation alternatives are possible, each of which has to be explored in order to find the alternative that is most likely to be successful. A web service system that is able to operate successfully in a business setting is based on a business idea consisting of a value model that offers a sound value proposition to all actors involved. In a service-oriented system, like a web service system, actors (services) have specific goals and depend on each other for reaching those goals. Therefore, a business-oriented web service exploration process should be both goal-oriented and value-based. Our approach, described in this paper, combines the goal-oriented i* and value-based e3-value frameworks in exploring web service ideas from a business perspective. The impact of profitability issues on other functional and non-functional requirements of an alternative, visualized using i* modeling constructs, is determined by using e3-value profitability evaluation results in analyzing i* models. Our approach contains an iterative exploration process in which knowledge gained by evaluating infeasible alternatives is used in devising new alternatives. We make use of various viewpoints to reveal multi-perspective insight into the viability of alternatives at the business value, business process and information system levels. A real-life case study of the digital music value chain in internet radio is used as an illustration. |
Aspect-oriented programming (AOP) has been attracting much attention in the Software Engineering community by advocating that programs should be structured according to programmer concerns, such as ``efficient use of memory''. However, like other programming paradigms in their early days, AOP hasn't yet addressed the earlier phases of software development. In particular, it is still an open question how one identifies aspects early on in the software development process. This paper proposes an answer to this question. Specifically, we show that aspects can be discovered during goal-oriented requirements analysis. Our proposal includes a systematic process for discovering aspects from relationships between functional and non-functional goals. We illustrate the proposed process with a case study adapted from the literature. |
AOSE methodologies and models borrow various abstractions and concepts from the organization and sociology disciplines. Although they all view a multi-agent system as an organized society, the organizational abstractions, assumptions, concepts, and models in them are actually used in different ways. It is therefore desirable to have a systematic way of analyzing and comparing the organizational and social concepts in AOSE. The contribution of this paper is twofold. Firstly, we describe and define the modeling construct levels and the social premises of multi-agent systems that should be modeled and analyzed during development, and we identify and classify the categories of organizational and social concepts in the AOSE literature that are used to deal with them from the standpoint of organization abstractions. Secondly, we analyze some methodologies and models in AOSE, explaining how the organizational and social concepts are used to specify and analyze multi-agent systems with various social premises at different levels. |
As the
problems that software systems are
used to solve grow in size and complexity, it becomes harder and harder
to analyze these problems and come up with system requirements
specifications using informal requirements engineering approaches.
Recently, Goal-Oriented Requirements Engineering (GORE), where
stakeholder goals are identified, analyzed/decomposed and then assigned
to software components or actors in the environment, and Agent-Oriented
Software Engineering (AOSE), where goals are objectives that agents
strive to achieve, have been gaining popularity. Their reliance on
goals makes GORE and AOSE a good match. A number of goal-oriented
approaches include formal components that allow for rigorous analysis
of system properties. However, they do not support reasoning about the
goals and knowledge of agents. This thesis presents an agent-oriented
requirements engineering approach that combines informal i* models with
formal specifications written in the CASL language. CASL’s support for
agent goals and knowledge allows for formal analysis of agent
interactions, goal decompositions, and epistemic feasibility of agent
plans. Intentional Annotated Strategic Rationale (iASR) diagrams based
on the SR diagrams of i* are proposed in this thesis, together with the
mapping rules for creating the corresponding formal CASL
specifications. A methodology for the combined use of i* and CASL is
proposed and applied to a meeting scheduling process specification. |
The purpose
of this tutorial is to
delineate and illustrate the correct use and interpretation of case
studies. It will help software engineers identify and avoid common
mistakes by giving them a solid grounding in the fundamentals of case
studies as a research method. Using an equal blend of lecture and
discussion, it aims to provide software engineers with a foundation for
conducting, reviewing, and reading case studies. For researchers, this
tutorial will provide a starting point for learning how to conduct case
studies. They will be able to find, assess, and apply appropriate
resources at their home institution. For
reviewers, the tutorial will provide guidance on how to judge the
quality and validity of reported case studies. They will be able to use
the criteria presented in this tutorial to assess whether research
papers based on case studies are suitable for publication, allowing
them to raise the quality of publications and give appropriate feedback
to authors. For practitioners, the tutorial will provide a better
awareness of how to interpret the claims made by researchers about new
software engineering methods and tools. Practitioners will also gain
deeper insights into the roles they can play in designing and
conducting case studies in collaborative research projects. As well,
they will read case studies
more effectively and be better able to identify results suitable for
use in their workplace. Biographies: Dewayne E. Perry is a Professor and the Motorola Regents Chair of Software Engineering at The University of Texas at Austin. Susan Elliott Sim is an Assistant Professor at University of California, Irvine. Steve Easterbrook is an Associate Professor at University of Toronto. All three tutors have extensive experience in software engineering case studies. |
Business/IT
alignment is a key management
issue
and has been widely investigated. Requirements engineering now deals
with goal modeling and enterprise modeling; it addresses "functional
integration" but
does not explicitly address "strategic fit". The assumption
of this talk is that making explicit the business model can
contribute to improving the business/IT alignment. The role of a
formally defined business model is outlined. We propose a Business
Model Ontology to formulate, understand, analyse and share a
company's business model when designing information systems. Moreover,
information systems supporting environmental scanning and
technology assessment, which are of prime importance for the
evolution of both the business and the IT, remain less investigated. This
talk also proposes a preliminary theoretical framework
for assessing a technology environment based on properties such
as complexity, uncertainty, and disruptiveness. Biographies: Dr Yves Pigneur is professor of information systems at the University of Lausanne (UNIL). He holds a Ph.D. from the University of Namur in Belgium. In 1994, he was a visiting professor in the IS departments of Georgia State University (Atlanta) and the Hong Kong University of Science and Technology. In 2004, he was a visiting professor in the IS department of the University of British Columbia in Vancouver. His interests cover information system design, requirements engineering, management of information technology, and e-business. |
Based on
interviews with a number of
architects and managers from a wide range of organizations, we
characterize how architecture is perceived in practice. We identify
three groups of organizations that differ with respect to their level
of architectural thinking and the alignment of business and IT on
architectural issues. Analysis of the interviews further indicates that
these three groups differ in the architecture aspects and critical
success factors they emphasize. Our results provide a starting point
for assessing architecture maturity and alignment within organizations,
and can be used to help harmonize different architectural tunes played
within organizations. |
Bas van der
Raadt presented ongoing
work
in
coupling i* modeling with the e-business modelling framework called
e3-value. The latter has some connection with the scenario modelling
language UCM (Use Case Maps). |
Trusted
Computing refers to the hardware
and
software technologies proposed and currently being implemented by the
various members of the Trusted Computing Group (TCG). The TCG
consists of
thirty-five members including Microsoft, IBM, Intel, HP, AMD, Nokia,
and Sony. Trusted Computing uses technical means such as platform
configuration checks, separate and secure kernels in both the OS and
applications, protective keys using complex cryptography, etc.

Proponents of Trusted Computing claim it will:
- Promote security for the average user by protecting personal information and PC control from malicious users.
- Protect against such things as identity theft.
- Promote secure online transactions.
- Benefit corporate users through increased security and specific document access control.

Opponents of Trusted Computing claim it will:
- Take away the security and privacy of the PC user by giving control of their PC to the PC manufacturers and service providers, as well as to other potential parties such as the government.
- Increase market share for TCG member companies.
- Combat software piracy and further implement Digital Rights Management (DRM).

This case study illustrates some of the scalability challenges for i* models, and raises issues about how i* modeling can be extended to deal with multiple viewpoints. |
In the talk,
I'll be presenting a
framework
that supports the formal verification of early requirements
specifications. The framework is based on Formal Tropos, a
specification language that adopts primitive
concepts of i* for modeling early requirements (such as actor, goal,
strategic dependency), along with a rich temporal specification
language. I'll
also show how existing formal analysis techniques, and in particular
model checking, can be adapted for the automatic verification of Formal
Tropos specifications. These techniques have been implemented in a
tool, called the T-Tool, which maps Formal Tropos specifications into a
language that can be handled by the NuSMV model checker. Finally, the
methodology will be evaluated on a course-exam management case study.
Our experiments show that formal analysis reveals gaps and
inconsistencies in early requirements specifications that are by no
means trivial to discover without the help of formal analysis
tools. |
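The idea behind this kind of automatic verification can be illustrated with a toy sketch. This is not the T-Tool or NuSMV (which use symbolic techniques over rich temporal logics), and the exam-process states below are invented for illustration; the sketch only shows the underlying principle: exhaustively explore the reachable states of a transition system and check that a safety property holds in every one of them.

```python
from collections import deque

# Hypothetical states of a simplified exam-management process.
TRANSITIONS = {
    "scheduled": ["taken", "cancelled"],
    "taken": ["graded"],
    "graded": [],
    "cancelled": [],
}

def check_safety(initial, transitions, invariant):
    """Breadth-first exploration of the reachable states; return the
    first state violating the invariant (a counterexample), or None
    if the property holds everywhere."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state  # counterexample: invariant violated here
        for succ in transitions[state]:
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return None  # invariant holds in all reachable states

# Safety property: the process never reaches an "error" state.
print(check_safety("scheduled", TRANSITIONS, lambda s: s != "error"))
```

A model checker returns a counterexample trace when a property fails, which is precisely how such tools expose the non-obvious gaps and inconsistencies mentioned above; here, asking whether "graded" is unreachable would fail and return that state as the counterexample.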